• Title/Summary/Keyword: Hybrid Model

Enhancement of Ozone and Carbon Monoxide Associated with Upper Cut-off Low during Springtime in East Asia

  • Moon, Yun-Seob;Drummond, James R.
    • Journal of Korean Society for Atmospheric Environment / v.26 no.5 / pp.475-489 / 2010
  • In order to verify the enhancement of ozone and carbon monoxide (CO) during springtime in East Asia, we investigated weather conditions and data from remote sensors, air quality models, and air quality monitors. These include the geopotential height archived from the final (FNL) meteorological field, the potential vorticity and wind velocity simulated by the Meteorological Mesoscale Model 5 (MM5), the back trajectory estimated by the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model, the total column amount of ozone and the aerosol index retrieved from the Total Ozone Mapping Spectrometer (TOMS), the total column density of CO retrieved from the Measurement of Pollution in the Troposphere (MOPITT), and the concentrations of ozone and CO simulated by the Model for Ozone and Related Chemical Tracers (MOZART). In particular, the total column density of CO, which might originate from the combustion of fossil fuels and the burning of biomass in China, increased in East Asia during spring 2000. In addition, the enhancement of the total column amounts of ozone and CO appeared to be associated with both an upper cut-off low near 500 hPa and the frontogenesis of a surface cyclone during a weak Asian dust event. At the same time, high concentrations of ozone and CO at the Earth's surface were observed at the Seoul air quality monitoring site, located in the surface frontogenesis zone in Korea. It was clear that ozone intruded through the downward-stretched vortex anomalies, which carried ozone-rich airflow, during the movement and development of the cut-off low, followed by the catalytic photochemical reaction of ozone precursors at the Earth's surface during the day. In addition, air pollutants such as CO and aerosol were tracked along the cyclone vortex and the strong westerly, as shown by the back trajectories for Seoul and Busan, respectively. Consequently, the maxima of ozone and CO in the two areas appeared at different times because of the time lag between those gases, including their catalytic photochemical reactions together with the intrusion from the upper troposphere, as well as the path of their transport from China during the weak Asian dust event.

Product Data Interoperability based on Layered Reference Ontology (계층적 참조 온톨로지 기반의 제품정보 간 상호운용성 확보)

  • Seo, Won-Chul;Lee, Sun-Jae;Kim, Byung-In;Lee, Jae-Yeol;Kim, Kwang-Soo
    • The Journal of Society for e-Business Studies / v.11 no.3 / pp.53-71 / 2006
  • In order to cope with the rapidly changing product development environment, individual manufacturing enterprises are forced to collaborate with each other by establishing a virtual organization. In collaboration, designated organizations work together for mutual gain based on product data interoperability. However, product data interoperability is not fully facilitated due to semantic inconsistency among the product data models of individual enterprises. In order to overcome the semantic inconsistency problem, this paper proposes a reference ontology, the Reference Domain Ontology (RDO), and a methodology for product data interoperability with semantic consistency using RDO. RDO describes the semantics of the product data model and metamodel for all application domains in a virtual organization. Using RDO, application domains in a virtual organization can easily understand the product data models of the others. RDO is agile and temporal: it is created with the formation of a virtual organization, copes with changes to the organization, and disappears when the organization dissolves. RDO is built by a hybrid approach, top-down using an upper ontology and bottom-up based on merging the ontologies of the application domains in a virtual organization. With this methodology, every domain in a virtual organization can achieve product data model interoperability without model transformation.
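
To make the mediation idea concrete, here is a toy sketch of exchanging product data through a shared reference ontology. Every name in it (`RDO_MAP`, `domain_a`, `PartNo`, and so on) is hypothetical and merely stands in for the paper's RDO machinery; it is not the authors' implementation.

```python
# Toy illustration: two application domains with different product-data
# vocabularies exchange data through a shared reference ontology, so no
# direct pairwise mapping between domains is needed.

RDO_MAP = {
    # (domain, local term)   -> reference (RDO) concept
    ("domain_a", "PartNo"):    "product_identifier",
    ("domain_a", "Mat"):       "material",
    ("domain_b", "ItemCode"):  "product_identifier",
    ("domain_b", "Substance"): "material",
}

def to_reference(domain: str, record: dict) -> dict:
    """Lift a domain-local record into RDO concepts."""
    return {RDO_MAP[(domain, k)]: v for k, v in record.items()}

def to_domain(domain: str, ref_record: dict) -> dict:
    """Lower an RDO record into a target domain's vocabulary."""
    inverse = {ref: local for (d, local), ref in RDO_MAP.items() if d == domain}
    return {inverse[k]: v for k, v in ref_record.items()}

# Domain A's record becomes readable by domain B via the reference ontology.
a_record = {"PartNo": "X-100", "Mat": "steel"}
print(to_domain("domain_b", to_reference("domain_a", a_record)))
# -> {'ItemCode': 'X-100', 'Substance': 'steel'}
```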

Simulation of Mixing Behavior for Dredging Plume using Puff Model (퍼프모형을 이용한 준설플륨의 혼합거동 모의)

  • Kim, Young-Do;Park, Jae-Hyeon;Lee, Man-Soo
    • Journal of Korea Water Resources Association / v.42 no.10 / pp.891-896 / 2009
  • Puff models have been developed to simulate the advection-diffusion processes of suspended solids released by dredging, either alone or in combination with Eulerian models. Computational efficiency and accuracy are of prime importance in designing these hybrid approaches to simulating a pollutant discharge, and we characterize two relatively simple Lagrangian techniques in this regard: forward Gaussian puff tracking (FGPT) and backward Gaussian puff tracking (BGPT). FGPT and BGPT offer dramatic savings in computational expense, but their applicability is limited by accuracy concerns in the presence of spatially variable flow or diffusivity fields, or of complex no-flux or open boundary conditions. For long simulations, particle and/or puff methods can transition to an Eulerian model where appropriate, since the relative computational expense of Lagrangian methods increases with time for continuous sources. Although we focus on simple Lagrangian models that are not suitable for all environmental applications, many of the implementation and computational efficiency concerns outlined here are also relevant to using higher-order particle and puff methods to extend the near field.
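
As a reference point for the FGPT technique named in this abstract, here is a minimal forward Gaussian puff tracking sketch in 2-D. The uniform velocity, constant diffusivity, unit puff mass, and absence of boundaries are simplifying assumptions, not the paper's configuration.

```python
import numpy as np

U, V = 0.5, 0.1          # ambient velocity components (m/s), assumed uniform
D = 0.2                  # horizontal diffusivity (m^2/s), assumed constant
DT = 10.0                # time step (s)
N_STEPS = 100            # one puff released per step (continuous source)

def concentration(x, y, centers, ages):
    """Sum Gaussian puff contributions at point (x, y)."""
    c = 0.0
    for (cx, cy), t in zip(centers, ages):
        var = 2.0 * D * t                      # sigma^2 grows linearly in time
        if var <= 0.0:
            continue
        r2 = (x - cx) ** 2 + (y - cy) ** 2
        c += np.exp(-r2 / (2.0 * var)) / (2.0 * np.pi * var)
    return c

centers, ages = [], []
for step in range(N_STEPS):
    centers.append((0.0, 0.0))                 # release a new puff at the source
    ages.append(0.0)
    # advect every puff center with the local velocity and age it
    centers = [(cx + U * DT, cy + V * DT) for cx, cy in centers]
    ages = [t + DT for t in ages]

print(f"C(100, 20) = {concentration(100.0, 20.0, centers, ages):.3e}")
```

The appeal noted in the abstract is visible here: cost scales with the number of puffs rather than with a full Eulerian grid, but the Gaussian shape assumes locally uniform flow and diffusivity, which is exactly where accuracy concerns arise.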

Surface Ozone Episode Due to Stratosphere-Troposphere Exchange and Free Troposphere-Boundary Layer Exchange in Busan During Asian Dust Events

  • Moon, Y.S.;Kim, Y.K.;Strong, K.;Kim, S.H.;Lim, Y.K.;Oh, I.B.;Song, S.K.
    • Journal of Environmental Science International / v.11 no.5 / pp.419-436 / 2002
  • The current paper reports on the enhancement of O$_3$, CO, NO$_2$, and aerosols during the Asian dust event that occurred over Korea on 1 May 1999. To confirm the origin and net flux of the O$_3$, CO, NO$_2$, and aerosols, the meteorological parameters of the weather conditions were investigated using the Mesoscale Meteorological Model 5 (MM5) and the TOMS total ozone and aerosol index, the back trajectory was identified using the Hybrid Single-Particle Lagrangian Integrated Trajectory model (HYSPLIT), and the ozone and ozone precursor concentrations were determined using the Urban Airshed Model (UAM). In the presence of sufficiently large concentrations of NO$_x$, the oxidation of CO led to O$_3$ formation with OH, HO$_2$, NO, and NO$_2$ acting as catalysts. The sudden enhancement of O$_3$, CO, NO$_2$, and aerosols was also found to be associated with a deepening cut-off low connected with a surface cyclone and a surface anticyclone located to the south of Korea during the Asian dust event. The wave pattern of the upper trough/cut-off low and the total ozone level remained stationary when they came into contact with a surface cyclone during the Asian dust event. A typical example of a stratosphere-troposphere exchange (STE) of ozone was demonstrated by tropopause folding due to the jet stream. As such, the secondary maxima of ozone above 80 ppbv that occurred at night in Busan, Korea on 1 May 2001 were considered to result from vertical mixing and advection from a free troposphere-boundary layer exchange in connection with an STE in the upper troposphere, whereas the sudden enhancement of ozone above 100 ppbv during the day was explained by the catalytic reaction of ozone precursors and the transport of ozone from a slow-moving anticyclone area that included high levels of ozone and its precursors coming from China to the south of Korea. The aerosols identified in the free troposphere over Busan, Korea on 1 May 1999 originated from the Taklamakan and Gobi deserts across the Yellow River. In particular, the 1000 m profile indicated that the source of the air parcels was an anticyclone located to the south of Korea. The net flux due to the first invasion of ozone between 0000 LST and 0600 LST on 1 May 1999 agreed with the observed ground-based background concentration of ozone. From 0600 LST to 1200 LST, the net flux of the second invasion of ozone was twice that of the day before. In this case, a change in the horizontal wind direction may have been responsible for the ozone increase.
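
For reference, the catalytic CO oxidation cycle that the abstract invokes, with OH/HO$_2$ and NO/NO$_2$ recycled as catalysts, is the standard chain:

\begin{align*}
\mathrm{CO} + \mathrm{OH} &\rightarrow \mathrm{CO_2} + \mathrm{H} \\
\mathrm{H} + \mathrm{O_2} + \mathrm{M} &\rightarrow \mathrm{HO_2} + \mathrm{M} \\
\mathrm{HO_2} + \mathrm{NO} &\rightarrow \mathrm{OH} + \mathrm{NO_2} \\
\mathrm{NO_2} + h\nu &\rightarrow \mathrm{NO} + \mathrm{O}(^3P) \\
\mathrm{O}(^3P) + \mathrm{O_2} + \mathrm{M} &\rightarrow \mathrm{O_3} + \mathrm{M} \\
\text{net:}\quad \mathrm{CO} + 2\,\mathrm{O_2} + h\nu &\rightarrow \mathrm{CO_2} + \mathrm{O_3}
\end{align*}

The net reaction shows why sufficiently large NO$_x$ concentrations matter: without NO to convert HO$_2$ back to OH, the cycle does not produce ozone.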

The Effects of Declination and Curvature Weight in DEM (수치표고모형에서 경사와 곡률경중율의 영향)

  • Yang, In-Tae;Choi, Seung-Pil;Kwon, Hyun;Kim, Wook-Nam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.8 no.2 / pp.45-51 / 1990
  • A DEM must have high accuracy with respect to the actual topography. A model that can compute heights at arbitrary plane positions from the topographic data by interpolation must be constructed. Interpolation is affected by the accuracy of the observations, which include noise, and by the slope and the curvature weight; data smoothing is a method to reduce the noise. Average declination and area ratio are variables that behave similarly with respect to slope, but in a local area the area ratio better captures local change. This study classifies the terrain by slope to analyze the effects of the slope and curvature weights, and then selects the most probable model. The results are as follows: in terrain classification by slope, p16 and p24 fitted the plane surface, p16 and S the varying surface, and S and p24 the irregular surface; in classification by curvature, p24 and S fitted the plane or varying surface, and p16 the irregular surface; in the hybrid case, p16, p24, and S fitted the plane, varying, and irregular surfaces, respectively. Smoothing was most effective at a slope of 50 percent and a curvature weight of 0.0015.
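
To illustrate what a curvature weight does in DEM smoothing, here is a minimal sketch that penalizes surface curvature (a discrete Laplacian) while fitting noisy heights. The grid, synthetic terrain, and noise level are assumptions; only the weight value 0.0015 is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
true_z = np.sin(3 * xx) * np.cos(2 * yy)          # synthetic terrain
z = true_z + rng.normal(0, 0.05, true_z.shape)    # noisy height observations

W_CURV = 0.0015   # curvature weight (value reported as most effective above)

def laplacian(a):
    """5-point discrete Laplacian with replicated edges."""
    p = np.pad(a, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * a

# Gradient descent on E(s) = ||s - z||^2 + W_CURV * ||Laplacian(s)||^2
s = z.copy()
for _ in range(500):
    grad = 2 * (s - z) + 2 * W_CURV * laplacian(laplacian(s))
    s -= 0.1 * grad

print("RMS error, raw:     ", round(float(np.sqrt(np.mean((z - true_z) ** 2))), 4))
print("RMS error, smoothed:", round(float(np.sqrt(np.mean((s - true_z) ** 2))), 4))
```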

Construction Claims Prediction and Decision Awareness Framework using Artificial Neural Networks and Backward Optimization

  • Hosny, Ossama A.;Elbarkouky, Mohamed M.G.;Elhakeem, Ahmed
    • Journal of Construction Engineering and Project Management / v.5 no.1 / pp.11-19 / 2015
  • This paper presents an optimized artificial neural network (ANN) claims prediction and decision awareness framework that guides owner organizations in their pre-bid construction project decisions to minimize claims. The framework is composed of two genetically optimized ANN models: a Claims Impact Prediction Model (CIPM) and a Decision Awareness Model (DAM). The CIPM is composed of three separate ANNs that predict the cost and time impacts of the possible claims that may arise in a project. The models also predict the expected type of relationship between the owner and the contractor based on their behavioral and technical decisions during the bidding phase of the project. The framework is implemented using actual data from international projects in the Middle East and Egypt (projects owned by either public or private local organizations who hired international prime contractors to deliver the projects). A literature review, interviews with pertinent experts in the Middle East, and lessons learned from several international construction projects in Egypt determined the input decision variables of the CIPM. The ANN training, implemented in a spreadsheet environment, was optimized using a genetic algorithm (GA): the weights of the different layers of each ANN were treated as variables, and the total squared error was used as the objective function to be minimized. Data was collected from thirty-two international construction projects in order to train and test the ANNs of the CIPM, which predicted cost overruns, schedule delays, and relationships between contracting parties. A genetic-optimization backward analysis technique was then applied to develop the Decision Awareness Model (DAM). The DAM combines the three artificial neural networks of the CIPM to assist project owners in setting optimum values for their behavioral and technical decision variables. It implements an intelligent, user-friendly input interface that helps project owners visualize the impact of their decisions on the project's total cost, original duration, and expected owner-contractor relationship. The framework presents a unique and transparent hybrid GA-ANN training and testing method. It has been implemented in a spreadsheet environment using MS Excel® and EVOLVER™ V.5.5, and it provides project owners with a decision-support tool that raises their awareness regarding their pre-bid decisions for a construction project.
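
The core technique named here, training ANN weights with a GA that minimizes total squared error, can be sketched as follows. The architecture, GA settings, and toy data are illustrative assumptions, not the authors' actual models or project data.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID = 4, 6                    # inputs (pre-bid decisions), hidden units
N_W = N_IN * N_HID + N_HID            # weights: input->hidden plus hidden->output
X = rng.uniform(0, 1, (32, N_IN))     # 32 synthetic "projects"
y = X.sum(axis=1, keepdims=True) / N_IN  # synthetic "claim impact" target

def forward(w, X):
    W1 = w[: N_IN * N_HID].reshape(N_IN, N_HID)
    W2 = w[N_IN * N_HID :].reshape(N_HID, 1)
    return np.tanh(X @ W1) @ W2

def total_sq_error(w):
    return float(np.sum((forward(w, X) - y) ** 2))

# Simple generational GA: elitism, blend crossover, Gaussian mutation.
POP, GENS = 60, 200
pop = rng.normal(0, 1, (POP, N_W))
for gen in range(GENS):
    fit = np.array([total_sq_error(w) for w in pop])
    elite = pop[np.argsort(fit)[: POP // 5]]           # keep the best 20%
    children = []
    while len(children) < POP - len(elite):
        a, b = elite[rng.integers(len(elite), size=2)]
        children.append(0.5 * (a + b) + rng.normal(0, 0.1, N_W))
    pop = np.vstack([elite, children])

best = pop[np.argmin([total_sq_error(w) for w in pop])]
print("best total squared error:", round(total_sq_error(best), 4))
```

One design note: a GA needs only objective evaluations, no gradients, which is why this style of training fits naturally in a spreadsheet optimizer like the EVOLVER tool the abstract mentions.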

Optimal Reservoir Operation for Flood Control Using a Hybrid Approach (Case Study: Chungju Multipurpose Reservoir in Korea) (복합 모델링 기법을 이용한 홍수시 저수지 최적 운영 (사례 연구 : 충주 다목적 저수지))

  • Lee, Han-Gu;Lee, Sang-Ho
    • Journal of Korea Water Resources Association / v.31 no.6 / pp.727-739 / 1998
  • The main objectives of optimal reservoir operation can be described as follows: maximization of benefits through optimal allocation of limited water resources to various purposes, and minimization of the costs of flood damage in potentially affected regions, of the risk of dam failure, etc., through the safe drainage of a bulky volume of excess water by proper reservoir operation. Reviewing past research on reservoir operation, the former has been studied far more extensively over the last decades than the latter. This study focuses on developing a methodology for optimal reservoir operation for flood control, and a case study is performed on the Chungju multipurpose reservoir in Korea. The final goal of the study is to establish an optimal reservoir operation system that can search for a policy compromising two conflicting objectives: downstream flood damage, and dam safety together with upstream flood damage. To reach this goal, the following items were studied: (1) validation of hydrological data using HYMOS; (2) establishment of a downstream flood routing model coupling a rainfall-runoff model and the SOBEK system for 1-D hydrodynamic flood routing; (3) replication of a flood damage estimation model by a neural network; and (4) development of an integrated reservoir optimization module for an optimal operation policy.
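
The two-objective trade-off described here, downstream flood damage versus dam safety, can be made concrete with a toy release-scheduling sketch. This is not the authors' system: the mass balance, penalty shapes, hydrograph, and all numbers are assumptions chosen only to exhibit the conflict.

```python
from itertools import product

INFLOW = [500.0, 900.0, 1400.0, 1100.0, 600.0]   # flood hydrograph (m^3/s), assumed
S0, S_MAX = 0.6, 1.0                             # storage as fraction of capacity
DT_FRAC = 0.0002                                 # converts net flow to storage fraction

def penalty(releases):
    s, cost = S0, 0.0
    for q_in, q_out in zip(INFLOW, releases):
        s += (q_in - q_out) * DT_FRAC            # reservoir mass balance
        cost += max(0.0, q_out - 800.0) ** 2     # downstream damage above 800 m^3/s
        cost += 1e7 * max(0.0, s - S_MAX) ** 2   # dam-safety penalty if overtopped
    return cost

# Exhaustive search over a coarse release grid (fine for 5 periods).
grid = [400.0, 600.0, 800.0, 1000.0, 1200.0]
best = min(product(grid, repeat=len(INFLOW)), key=penalty)
print("best release schedule:", best, "penalty:", round(penalty(best), 1))
```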

Improved Resource Allocation Model for Reducing Interference among Secondary Users in TV White Space for Broadband Services

  • Marco P. Mwaimu;Mike Majham;Ronoh Kennedy;Kisangiri Michael;Ramadhani Sinde
    • International Journal of Computer Science & Network Security / v.23 no.4 / pp.55-68 / 2023
  • In recent years, Television White Space (TVWS) has attracted the interest of many researchers due to the propagation characteristics obtainable in the 470 MHz to 790 MHz spectrum bands. The abundance of unused channels in the TV spectrum allows secondary users (SUs) to use the channels for broadband services, especially in rural areas. However, when the number of SUs in a TVWS wireless network increases, the aggregate interference also increases. Aggregate interference is the combined harmful interference, which can include both co-channel and adjacent-channel components. Aggregate interference on the side of the primary users (PUs) has been extensively scrutinized; therefore, resource allocation (power and spectrum) is crucial when designing a TVWS network to avoid interference from SUs to PUs and among the SUs themselves. This paper proposes a model that improves resource allocation to reduce the aggregate interference among SUs for broadband services in rural areas. The proposed model uses a joint power and spectrum hybrid of the Firefly algorithm (FA), the Genetic algorithm (GA), and the Particle Swarm Optimization (PSO) algorithm, considering both co-channel interference (CCI) and adjacent-channel interference (ACI). The algorithm is integrated with an admission control algorithm so that some SUs can be removed from the TVWS network whenever the SINR thresholds for the SUs and the PU are not met. We considered the infeasible system in which all SUs and the PU may not be supportable simultaneously, and therefore propose a joint spectrum and power allocation with an admission control algorithm that has better complexity and performance than the algorithms proposed in the existing literature. The performance of the proposed algorithm is compared using metrics such as sum throughput, PU SINR, algorithm running time, and SU SINR below the threshold, and the results show that the PSOFAGA with ELGR admission control algorithm performs best compared to the GA, PSO, FA, and FAGAPSO algorithms.
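
The admission-control idea described here, dropping SUs until the remaining set meets its SINR thresholds, can be sketched as follows. The channel gains, powers, noise level, threshold, and the worst-user removal rule are illustrative assumptions, not the paper's ELGR algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
N_SU = 8
G = rng.uniform(0.1, 1.0, (N_SU, N_SU))          # G[i, j]: gain from SU j's tx to SU i's rx
np.fill_diagonal(G, rng.uniform(2.0, 4.0, N_SU)) # direct links are stronger
P = np.full(N_SU, 0.1)                           # transmit powers (W), fixed here
NOISE = 1e-3                                     # receiver noise power (W)
SINR_MIN = 2.0                                   # linear SINR threshold

def sinr(active):
    """SINR of each active SU, counting interference from other active SUs."""
    idx = list(active)
    g = G[np.ix_(idx, idx)]
    sig = np.diag(g) * P[idx]
    interf = g @ P[idx] - sig                    # row sums minus own signal
    return sig / (interf + NOISE)

active = set(range(N_SU))
while active:
    s = sinr(active)
    if s.min() >= SINR_MIN:
        break                                    # feasible: all thresholds met
    worst = list(active)[int(np.argmin(s))]      # drop the SU furthest below threshold
    active.remove(worst)

print("admitted SUs:", sorted(active))
```

In the paper's setting the powers and channels would themselves be optimized (by the hybrid FA/GA/PSO search) before admission control decides which users to shed; here the powers are fixed purely to keep the sketch short.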

The influence of composite resin restoration on the stress distribution of notch-shaped noncarious cervical lesions: a three-dimensional finite element analysis study (복합레진 수복물이 쐐기형 비우식성 치경부 병소의 응력 분포에 미치는 영향에 관한 3차원 유한요소법적 연구)

  • Lee, Chae-Kyung;Park, Jeong-Kil;Kim, Hyeon-Cheol;Woo, Sung-Gwan;Kim, Kwang-Hoon;Son, Kwon;Hur, Bock
    • Restorative Dentistry and Endodontics / v.32 no.1 / pp.69-79 / 2007
  • The purpose of this study was to investigate the effects of composite resin restorations on the stress distribution of a notch-shaped noncarious cervical lesion using three-dimensional (3D) finite element analysis (FEA). An extracted maxillary second premolar was scanned serially with Micro-CT (SkyScan 1072; SkyScan, Aartselaar, Belgium). The 3D images were processed by 3D-DOCTOR (Able Software Co., Lexington, MA, USA). ANSYS (Swanson Analysis Systems, Inc., Houston, USA) was used to mesh and analyze the 3D FE model. The notch-shaped cavity was filled with hybrid or flowable resin, and each restoration was simulated with an adhesive layer thickness of $40{\mu}m$. A static load of 500 N was applied at a point on the buccal cusp (loading A) and on the palatal cusp (loading B). The principal stresses at the lesion apex (internal line angle of the cavity) and the middle vertical wall were analyzed using ANSYS. The results were as follows. 1. Under loading A, compressive stress was created in the unrestored and restored cavities; under loading B, tensile stress was created, with the peak stress concentration near the mesial corner of the cavity under each load condition. 2. Compared to the unrestored cavity, the principal stresses at the cemento-enamel junction (CEJ) and the internal line angle of the cavity were reduced in the restored cavity under both load conditions. 3. In teeth restored with the hybrid composite, the principal stresses at the CEJ and the internal line angle of the cavity were reduced more than with the flowable resin.
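
As a worked note on the quantity this abstract reports: the principal stresses at a point are the eigenvalues of the symmetric 3x3 Cauchy stress tensor. The tensor values below are arbitrary illustrative numbers in MPa, not results from the paper's FE model.

```python
import numpy as np

sigma = np.array([
    [40.0, 12.0,  6.0],   # [s_xx, t_xy, t_xz]
    [12.0, -25.0, 8.0],   # [t_xy, s_yy, t_yz]
    [ 6.0,  8.0, 10.0],   # [t_xz, t_yz, s_zz]
])

# eigvalsh handles symmetric matrices; sort descending so s1 >= s2 >= s3.
s1, s2, s3 = np.linalg.eigvalsh(sigma)[::-1]
print(f"principal stresses: s1={s1:.2f}, s2={s2:.2f}, s3={s3:.2f} MPa")
print("sign convention: positive = tensile, negative = compressive")
```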

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.27 no.2 / pp.1-15 / 2021
  • When it is difficult to make a decision, we ask friends or people around us for advice; when we decide to buy a product online, we read anonymous reviews before buying it. With the advent of the data-driven era, the development of IT technology is producing enormous amounts of data from individuals and objects alike. Companies and individuals have accumulated, processed, and analyzed such large amounts of data that decisions which once depended on experts can now be made, or executed directly, using data. Nowadays, the recommender system plays a vital role in determining users' preferences for purchasing goods, and web services (Facebook, Amazon, Netflix, YouTube) use recommender systems to induce clicks. For example, YouTube's recommender system, used by a billion people worldwide every month, draws on the videos users have watched and "liked". Recommender system research is deeply linked to practical business, so many researchers are interested in building better solutions. Recommender systems use the information obtained from their users to generate recommendations, because building them requires information on the items a user is likely to prefer. Through recommender systems, we have begun to trust patterns and rules derived from data rather than empirical intuition, and the growing capacity of data has pushed machine learning toward deep learning. However, recommender systems are not a complete solution: they require data that is sufficient in quantity, without scarcity, along with detailed information about the individual, and they work correctly only when these conditions hold. When the interaction log is insufficient, recommendation becomes a complex problem for both consumers and sellers, because the seller needs to make recommendations at a personal level while the consumer expects appropriate recommendations grounded in reliable data. In this paper, to improve the accuracy of "appropriate recommendations" to consumers, a recommender system combined with context-based deep learning is proposed. This research combines user-based data to create a hybrid recommender system; the hybrid approach developed is not a purely collaborative recommender system but a collaborative extension that integrates user data with deep learning. Customer review data were used as the data set. Consumers buy products in online shopping malls and then write product reviews, and rating reviews from buyers who have already purchased give users confidence before purchasing the product. However, recommendation systems mainly use scores or ratings rather than reviews to suggest items purchased by many users, even though consumer reviews contain product opinions and user sentiment relevant to evaluation. By incorporating these elements, this paper aims to improve the recommendation system. The algorithm is intended for situations where individuals have difficulty selecting an item; consumer reviews and record patterns make it possible to rely on the recommendations. The algorithm implements a recommendation system through collaborative filtering, and its predictive accuracy is measured by Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE).
Netflix strategically exploits its recommender system through annual competitions that reduce RMSE, making fair use of predictive accuracy. Research on hybrid recommender systems that combine NLP approaches with deep learning for personalized recommendation has been increasing. Among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased. Sentiment analysis is a text classification task based on machine learning, but machine-learning-based sentiment analysis has the disadvantage that it struggles to capture the characteristics of the text and thus to identify the information expressed in a review. In this study, we propose a deep learning recommender system that utilizes BERT-based sentiment analysis to minimize these disadvantages of machine learning. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (Gated Recurrent Units). The experiment showed that the BERT-based recommender system performed best.
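
The evaluation protocol named in this abstract, RMSE and MAE over held-out ratings, is easy to state concretely. The blend of a collaborative-filtering score with a BERT-style sentiment score (here just a placeholder in [0, 1]) and the weight `ALPHA` are illustrative assumptions, not the paper's model.

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Hypothetical predictions: a CF score adjusted by per-review sentiment.
cf_scores = np.array([3.8, 4.2, 2.5, 4.9, 3.1])     # plain CF rating predictions
sentiment = np.array([0.9, 0.4, 0.2, 0.95, 0.6])    # review sentiment in [0, 1]
ALPHA = 0.5                                         # blending weight (assumed)
blended = cf_scores + ALPHA * (sentiment - 0.5) * 2 # shift by up to +/- ALPHA stars

actual = np.array([4.0, 4.0, 2.0, 5.0, 3.5])        # held-out ratings
print("CF only     RMSE:", round(rmse(actual, cf_scores), 3),
      "MAE:", round(mae(actual, cf_scores), 3))
print("With sent.  RMSE:", round(rmse(actual, blended), 3),
      "MAE:", round(mae(actual, blended), 3))
```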