• Title/Summary/Keyword: Basis sub-model

Search Results: 171

A Spectrum Sharing Model for Compatibility between IMT-Advanced and Digital Broadcasting

  • Hassan, Walid A.;Rahman, Tharek Abd
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.9
    • /
    • pp.2073-2085
    • /
    • 2012
  • Recently, the International Telecommunication Union allocated the 470-862 MHz band to the digital broadcasting (DB) service. Moreover, the 790-862 MHz sub-band will be allocated to the next-generation mobile system, known as International Mobile Telecommunication-Advanced (IMT-A), and to the DB service on a co-primary basis in 2015. Currently, two candidate technologies are available to represent the IMT-A system: Mobile WiMAX and Long Term Evolution-Advanced (LTE-A). One of the main criteria for an IMT-A candidate is that it not cause additional interference to the primary service (i.e., DB). In this paper, we address the spectrum sharing issue between the IMT-A candidates and the DB service. More precisely, we investigate the interference between the DB service and the mobile network, which could be either LTE-A or WiMAX. Our study proposes a spectrum sharing model that takes the impact of interference into account and evaluates spectrum sharing requirements such as frequency separation and separation distance. The model considers three spectrum sharing scenarios: co-channel, zero guard band, and adjacent channel. A statistical analysis is performed, considering the interferer's spectrum emission mask and the victim receiver's blocking characteristics. The interference-to-noise ratio is used as the essential spectrum sharing criterion between the systems. The model accounts for the random distribution of users, antenna heights, and bandwidth effects, as well as the deployment environment. The results show that LTE-A is preferable to WiMAX in terms of having less interference impact on DB; this can eventually allow both services to operate without performance degradation and thus lead to efficient utilization of the radio spectrum.
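
As a rough illustration of the sharing criterion above, the sketch below computes the interference-to-noise ratio at a victim DB receiver as a function of separation distance. It assumes free-space path loss and lumps the emission mask and receiver blocking into a single ACIR figure; all numeric values are illustrative, not the paper's parameters.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (a simplifying assumption; the paper
    uses a statistical model with emission masks and blocking)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def interference_to_noise_db(tx_power_dbm: float, acir_db: float,
                             distance_m: float, freq_hz: float,
                             noise_floor_dbm: float) -> float:
    """I/N at the victim receiver for a given separation distance.
    ACIR lumps the interferer's emission mask and the victim's
    blocking/selectivity into one attenuation figure."""
    received_dbm = tx_power_dbm - fspl_db(distance_m, freq_hz) - acir_db
    return received_dbm - noise_floor_dbm

# Example: find the separation distance that meets a -6 dB I/N target
# (a commonly used coordination criterion; all figures are illustrative).
for d in (100, 500, 1000, 5000, 10000):
    i_n = interference_to_noise_db(tx_power_dbm=46, acir_db=30,
                                   distance_m=d, freq_hz=800e6,
                                   noise_floor_dbm=-98)
    print(f"{d:>6} m  ->  I/N = {i_n:6.1f} dB  {'OK' if i_n <= -6 else 'exceeds'}")
```

Sweeping the distance (or the ACIR, which grows with frequency separation) until I/N drops below the target mirrors how separation distance and frequency separation requirements are evaluated in such studies.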

A Study on the Coping Experience of Mental Disorder Symptoms (정신장애인의 정신질환 증상 대처 경험에 관한 연구)

  • Kim, Nanghee;Song, Seung-yeon;Kim, Hyojung
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.5
    • /
    • pp.158-167
    • /
    • 2021
  • This work explores personal coping with and insight into mental disorder symptoms from the perspective of the persons concerned, in order to lay an empirical basis for shifting the mental health service paradigm from a medical model to a human rights model. For this purpose, in-depth interviews with eight persons with mental disorders were conducted, and a model of practice was suggested through grounded theory analysis. As a result, 11 categories, 23 sub-categories, and 132 concepts were identified. The analysis shows that people with mental disorders changed their perspective on their symptoms through in-depth insight into their identities and symptoms, discovered their own autonomous ways of coping, and thereby managed their daily lives. Therefore, in developing a Korean alternative model for people with mental disorders, it is necessary to create conditions under which they can find their own countermeasures through opportunities for insight.

A Study on Transfer Process Model for long-term preservation of Electronic Records (전자기록의 장기보존을 위한 이관절차모형에 관한 연구)

  • Cheon, kwon-ju
    • The Korean Journal of Archival Studies
    • /
    • no.16
    • /
    • pp.39-96
    • /
    • 2007
  • Traditionally, transfer has meant the delivery of physical records such as paper documents, videos, and photographs to Archives or Records Centers on the basis of transfer guidelines. With the automation of the records management environment and the spread of new record creation and management applications, however, records are now created and managed electronically. The existing transfer system, in which filed records are moved to Archives or Records Centers in paper boxes, therefore needs to change. Given the need for a new transfer paradigm, the revision of the Records Act to include provisions on electronic records management and transfer is both desirable and proper. Nevertheless, the electronic transfer provisions are too abstract to apply in records management practice, so detailed methods and processes must be developed. In this context, this paper suggests an electronic records transfer process model based on international standards and foreign cases. Transferring records is part of the records management life cycle that keeps valuable records usable in the future, so both the producer and the archive must transfer the records themselves and their context information to a long-term preservation repository according to the transfer guidelines. In the long run, transfer means that records are moved to the archive through a formal transfer process with proper record protection steps. To this end, I analyzed the OAIS Reference Model and the Producer-Archive Interface Methodology Abstract Standard (CCSDS Blue Book) issued by the CCSDS (Consultative Committee for Space Data Systems). As the words 'Reference Model' and 'Abstract Standard' suggest, however, these standards are not suitable for direct application to business practice. To solve this problem, I also analyzed transfer cases from other countries. Through this analysis of theory and cases, I suggest an Electronic Records Transfer Process Model consisting of five sub-processes, Ingest prepare → Ingest → Validation → Preservation → Archival storage, each with its own transfer elements. Finally, to confirm the new model's feasibility, I classified Korean transfers into two types (from a public records center to a public archive, and from a civil records center to a public or civil archive) and applied the new model to both types of transfer cases.
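
A minimal sketch of the five-stage pipeline the model proposes, with hypothetical stage names and package fields; it only illustrates that each package of records plus its context information must pass through every sub-process in order.

```python
from dataclasses import dataclass, field

# The five sub-processes of the proposed transfer model, in order.
STAGES = ["ingest_prepare", "ingest", "validation",
          "preservation", "archival_storage"]

@dataclass
class TransferPackage:
    """A record set plus its context information, since the paper requires
    both to reach the repository together. Field names are hypothetical."""
    records: list
    context: dict
    completed: list = field(default_factory=list)

def run_transfer(pkg: TransferPackage) -> TransferPackage:
    """Drive a package through every stage in sequence; no stage may be
    skipped, mirroring the formal transfer process of the model."""
    for stage in STAGES:
        # A real system would run stage-specific checks here
        # (e.g., fixity validation, format normalization).
        pkg.completed.append(stage)
    return pkg

pkg = run_transfer(TransferPackage(records=["doc-001"], context={"fonds": "A"}))
print(pkg.completed)  # all five stages, in order
```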

Study on Governing Equations for Modeling Electrolytic Reduction Cell (전해환원 셀 모델링을 위한 지배 방정식 연구)

  • Kim, Ki-Sub;Park, Byung Heung
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.12 no.3
    • /
    • pp.245-251
    • /
    • 2014
  • Pyroprocessing for treating spent nuclear fuel has been developed on the basis of electrochemical principles. Process simulation is an important method for process development and experimental data analysis, and it is a necessary approach for pyroprocessing as well. To date, simulation work in pyroprocessing has focused on electrorefining, and there have been few investigations of electrolytic reduction. Unlike electrorefining, electrolytic reduction involves the specific features of gas evolution and a porous electrode, so different equations must be considered when developing a model of the process. This study summarizes the concepts and equations required for electrolytic reduction model development, drawn from the thermodynamic, mass transport, and reaction kinetics theories needed to analyze an electrochemical cell. The electrolytic reduction cell was divided into sections, the equations for each section were listed, and the boundary conditions connecting the sections were then specified. These equations are expected to serve as a basis for developing a simulation model in the future and to be applied in determining parameters associated with experimental data.
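
For a sense of the kinds of equations the study collects, the sketch below evaluates two standard electrochemical relations: the Nernst equation (thermodynamics) and the Butler-Volmer equation (reaction kinetics). The parameter values are illustrative only; the paper's full model additionally needs mass-transport, gas-evolution, and porous-electrode terms.

```python
import math

F = 96485.332   # Faraday constant, C/mol
R = 8.314462    # gas constant, J/(mol K)

def nernst_potential(e0: float, n: int, activity_ratio: float,
                     temp_k: float) -> float:
    """Equilibrium electrode potential, E = E0 - (RT/nF) ln(Q)."""
    return e0 - (R * temp_k / (n * F)) * math.log(activity_ratio)

def butler_volmer(i0: float, alpha_a: float, alpha_c: float,
                  overpotential: float, temp_k: float) -> float:
    """Electrode current density (A/m^2) from Butler-Volmer kinetics,
    a standard rate expression for electrochemical cell reactions."""
    f = F / (R * temp_k)
    return i0 * (math.exp(alpha_a * f * overpotential)
                 - math.exp(-alpha_c * f * overpotential))

# Illustrative numbers only; molten-salt parameters are system-specific.
print(nernst_potential(e0=-2.3, n=2, activity_ratio=0.01, temp_k=923.15))
print(butler_volmer(i0=10.0, alpha_a=0.5, alpha_c=0.5,
                    overpotential=0.05, temp_k=923.15))
```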

Evaluation of the Thermal Margin in a KOFA-Loaded Core by a Multichannel Analysis Methodology (다수로해석 방법론에 의한 국산핵연료 노심 열적 여유도 평가)

  • D. H. Hwang;Y. J. Yoo;Park, J. R.;Kim, Y. J.
    • Nuclear Engineering and Technology
    • /
    • v.27 no.4
    • /
    • pp.518-531
    • /
    • 1995
  • A study has been performed to investigate the thermal margin increase obtained by replacing the single-channel analysis model with a multichannel analysis model. A new critical heat flux (CHF) correlation, applicable to a 17×17 Korean Fuel Assembly (KOFA)-loaded core, was developed on the basis of the local conditions predicted by the subchannel analysis code TORC. The hot subchannel analysis was carried out using a one-stage analysis methodology with a prescribed nodal layout of the core. The results showed that more than 5% of the thermal margin can be recovered by introducing the TORC/KRB-1 system (multichannel analysis model) in place of the PUMA/ERB-2 system (single-channel analysis model). The thermal margin increase was attributed not only to the local thermal-hydraulic conditions in the hot subchannel predicted by the code, but also to the characteristics of the CHF correlation.
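
The thermal-margin figure in such analyses is typically the minimum DNBR (departure from nucleate boiling ratio) over the hot channel. A minimal sketch, with made-up axial profiles standing in for TORC output and the KRB-1 correlation:

```python
def dnbr_profile(local_heat_flux, predicted_chf):
    """DNBR at each axial node: CHF predicted by the correlation divided
    by the actual local heat flux. The minimum over the hot channel
    (MDNBR) is the usual thermal-margin figure."""
    return [chf / q for q, chf in zip(local_heat_flux, predicted_chf)]

# Illustrative axial profiles (MW/m^2); a real analysis would take these
# from a subchannel code such as TORC and a CHF correlation such as KRB-1.
q_local = [0.6, 0.9, 1.1, 1.0, 0.7]
q_chf   = [2.4, 2.2, 2.0, 2.1, 2.3]
profile = dnbr_profile(q_local, q_chf)
print(f"MDNBR = {min(profile):.2f}")  # compared against a design limit
```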


Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves the data imbalance caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and reflects the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The approach can thus provide stable default risk assessment services to companies that are difficult to assess with traditional credit rating models, such as small and medium-sized companies and startups. Although corporate default risk prediction using machine learning has been studied actively in recent years, most studies make predictions with a single model, so model bias remains an issue. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation method: the credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for evaluation methods to be prepared, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings as well as changes in future market conditions. This study reduces the bias of individual models by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various items of corporate information while keeping the short computation time that is an advantage of machine learning-based default risk prediction. To produce the forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences, pairs of forecasts were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource for increasing practical adoption by overcoming and improving on the limitations of existing machine learning-based models.
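
A minimal sketch of the stacking setup described above, using scikit-learn's built-in cross-validated stacking with synthetic data standing in for the K-IFRS features and the Merton-model target; the paper's exact sub-models (including the CNN) and data splits are not reproduced here.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the paper's financial-statement features;
# the real study used 160 columns of K-IFRS data and a Merton-model
# default risk as the continuous target.
X, y = make_regression(n_samples=2000, n_features=20, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners produce out-of-fold forecasts (cv=7 echoes the paper's
# seven-way split); a simple meta-learner combines them.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000,
                             random_state=0)),
    ],
    final_estimator=Ridge(),
    cv=7,
)
stack.fit(X_tr, y_tr)
print(f"held-out R^2: {stack.score(X_te, y_te):.3f}")
```

The meta-learner only ever sees out-of-fold predictions from the base models, which is what lets the ensemble dampen any single model's bias rather than inherit it.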

Estimation of Atmospheric Deposition Velocities and Fluxes from Weather and Ambient Pollutant Concentration Conditions : Part I. Application of multi-layer dry deposition model to measurements at north central Florida site

  • Park, Jong-Kil;Eric R. Allen
    • Environmental Sciences Bulletin of The Korean Environmental Sciences Society
    • /
    • v.4 no.1
    • /
    • pp.31-42
    • /
    • 2000
  • The dry deposition velocities and fluxes of air pollutants such as SO₂(g), O₃(g), HNO₃(g), sub-micron particulates, NO₃⁻(s), and SO₄²⁻(s) were estimated from local meteorological elements in the atmospheric boundary layer. The model used for these calculations was the multilayer resistance model developed by Hicks et al. The meteorological data were recorded on an hourly basis from July 1990 to June 1991 at the Austin Cary forest site near Gainesville, FL. Weekly integrated samples of ambient dry deposition species were collected at the site using triple-filter packs. For the study period, the annual average dry deposition velocities at this site were estimated as 0.87±0.07 cm/s for SO₂(g), 0.65±0.11 cm/s for O₃(g), 1.20±0.14 cm/s for HNO₃(g), 0.0045±0.0006 cm/s for sub-micron particulates, and 0.089±0.014 cm/s for NO₃⁻(s) and SO₄²⁻(s). The trends observed in the daily mean deposition velocities were largely seasonal, with larger deposition velocities in summer and smaller ones in winter. The monthly and weekly averaged deposition velocities did not differ greatly over the year, yet showed the same tendency toward higher values in summer and lower values in winter. The annual mean concentrations of the air pollutants obtained by the triple-filter pack every 7 days were 3.63±1.92 µg/m³ for SO₄²⁻, 2.00±1.22 µg/m³ for SO₂, 1.30±0.59 µg/m³ for HNO₃, and 0.704±0.419 µg/m³ for NO₃⁻. The air pollutant with the largest deposition flux was SO₂, followed by HNO₃, SO₄²⁻(s), and NO₃⁻(s) in order of magnitude. The sulfur dioxide and NO₃⁻ deposition fluxes were higher in winter than in summer, while the nitric acid and sulfate deposition fluxes were high during spring and summer.
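
Hicks-type models compute the deposition velocity as the inverse of resistances in series, v_d = 1/(r_a + r_b + r_c). A minimal sketch under a neutral-stability assumption, with illustrative parameter values rather than the site's measured hourly inputs:

```python
import math

VON_KARMAN = 0.4

def aerodynamic_resistance(z_ref: float, z0: float, ustar: float) -> float:
    """r_a for neutral stability (stability corrections omitted here)."""
    return math.log(z_ref / z0) / (VON_KARMAN * ustar)

def boundary_layer_resistance(ustar: float, schmidt: float) -> float:
    """Quasi-laminar sublayer resistance r_b, using the common form
    r_b = 2 (Sc/Pr)^(2/3) / (k u*) with Pr ~= 0.72."""
    return 2.0 * (schmidt / 0.72) ** (2.0 / 3.0) / (VON_KARMAN * ustar)

def deposition_velocity(r_a: float, r_b: float, r_c: float) -> float:
    """v_d = 1 / (r_a + r_b + r_c), the series-resistance form."""
    return 1.0 / (r_a + r_b + r_c)

# Illustrative inputs: 10 m reference height over a rough forest canopy,
# u* = 0.4 m/s, Sc ~= 1.07 for SO2, and an assumed surface resistance.
r_a = aerodynamic_resistance(z_ref=10.0, z0=1.0, ustar=0.4)
r_b = boundary_layer_resistance(ustar=0.4, schmidt=1.07)
v_d = deposition_velocity(r_a, r_b, r_c=80.0)
print(f"v_d = {100 * v_d:.2f} cm/s")  # flux = v_d * ambient concentration
```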


Optimizing Clustering and Predictive Modelling for 3-D Road Network Analysis Using Explainable AI

  • Rotsnarani Sethy;Soumya Ranjan Mahanta;Mrutyunjaya Panda
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.9
    • /
    • pp.30-40
    • /
    • 2024
  • Building an accurate 3-D spatial road network model has become an active area of research, promising a new paradigm for developing smart roads and intelligent transportation systems (ITS) that will help public and private road operators achieve better road mobility and eco-routing, ensuring better traffic flow, lower carbon emissions, and improved road safety. Dealing with such large-scale 3-D road network data poses challenges in obtaining accurate elevation information for a road network, which is needed to better estimate CO2 emissions and provide accurate routing for vehicles in an Internet of Vehicles (IoV) scenario. Clustering and regression techniques are suitable for recovering the missing elevation information for some points in a 3-D spatial road network dataset, which is envisaged as giving the public a better eco-routing experience. Further, Explainable Artificial Intelligence (xAI) has recently drawn researchers' attention for making models more interpretable, transparent, and comprehensible, enabling the design of efficient models that match users' requirements. The 3-D road network dataset, comprising the spatial attributes (longitude, latitude, altitude) of North Jutland, Denmark, collected from the publicly available UCI repository, is preprocessed through feature engineering and scaling to ensure optimal accuracy for the clustering and regression tasks. K-Means clustering and regression using a Support Vector Machine (SVM) with a radial basis function (RBF) kernel are employed for the 3-D road network analysis. Silhouette scores and the number of clusters are used to measure cluster quality, whereas error metrics such as MAE (Mean Absolute Error) and RMSE (Root Mean Square Error) are used to evaluate the regression method. For better interpretability of the clustering and regression models, SHAP (Shapley Additive Explanations), a powerful xAI technique, is employed in this research. Extensive experiments show that SHAP analysis validated the importance of latitude and altitude in predicting longitude, particularly in the four-cluster setup, providing critical insights into model behavior and feature contributions, with an accuracy of 97.22% and strong performance metrics across all classes (MAE of 0.0346 and MSE of 0.0018). The ten-cluster setup, while faster in SHAP analysis, presented interpretability challenges due to the increased clustering complexity. Hence, the K-Means (K=4) and SVM hybrid model demonstrated superior performance and interpretability, highlighting the importance of careful cluster selection in balancing model complexity and predictive accuracy.
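
A minimal sketch of the cluster-then-regress pipeline with SHAP attribution, using synthetic coordinates in place of the North Jutland dataset; the shap package usage and all parameter choices here are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
import shap  # SHAP library for feature attributions

# Synthetic (latitude, altitude) -> longitude stand-in for the North
# Jutland 3-D road network data from the UCI repository.
rng = np.random.default_rng(0)
X = rng.uniform([56.5, 0.0], [57.5, 150.0], size=(500, 2))  # lat, alt
y = 9.5 + 0.8 * (X[:, 0] - 57.0) + 0.001 * X[:, 1] + rng.normal(0, 0.01, 500)

X_scaled = StandardScaler().fit_transform(X)

# Step 1: K-Means with K=4, the setup the paper found most interpretable.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

# Step 2: one RBF-kernel SVR per cluster to predict longitude.
models = {c: SVR(kernel="rbf").fit(X_scaled[clusters == c], y[clusters == c])
          for c in range(4)}

# Step 3: SHAP attributes each prediction to the two input features,
# latitude and altitude (KernelExplainer is model-agnostic but slow).
background = shap.sample(X_scaled[clusters == 0], 50)
explainer = shap.KernelExplainer(models[0].predict, background)
shap_values = explainer.shap_values(X_scaled[clusters == 0][:10])
print(np.abs(shap_values).mean(axis=0))  # mean |SHAP| per feature
```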

Management of Knowledge Abstraction Hierarchy (지식 추상화 계층의 구축과 관리)

  • 허순영;문개현
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.23 no.2
    • /
    • pp.131-156
    • /
    • 1998
  • Cooperative query answering is a research effort to develop fault-tolerant and intelligent database systems using a semantic knowledge base constructed from the underlying database. Such a knowledge base has two aspects of usage. One is supporting the cooperative query answering process, providing both an exact answer and neighborhood information relevant to a query. The other is supporting ongoing maintenance of the knowledge base to accommodate changes in knowledge content and database usage. Existing studies have mostly focused on the cooperative query answering process and paid little attention to dynamic knowledge base maintenance. This paper proposes a multi-level knowledge representation framework called the Knowledge Abstraction Hierarchy (KAH) that can not only support cooperative query answering but also permit dynamic knowledge maintenance. The KAH consists of two types of abstraction hierarchies. The value abstraction hierarchy is constructed from abstract values that are hierarchically derived from specific data values in the underlying database on the basis of generalization and specialization relationships. The domain abstraction hierarchy is built on the various domains of the data values and incorporates the classification relationship between super-domains and sub-domains. On the basis of the KAH, a knowledge abstraction database is constructed on the relational data model; it accommodates diverse knowledge maintenance needs and flexibly facilitates cooperative query answering. In terms of knowledge maintenance, database operations are discussed for cases where either the contents of a given KAH or the structure of the KAH itself changes. In terms of cooperative query answering, database operations are discussed for the generalization and specialization processes and for conceptual query handling. A prototype system implemented at KAIST demonstrates the usefulness of the KAH in ordinary database application systems.
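
A minimal sketch of a value abstraction hierarchy, with hypothetical values, showing how generalization and specialization yield the neighborhood answers used in cooperative query answering:

```python
class ValueNode:
    """A node in a value abstraction hierarchy: an abstract value with
    its specializations beneath it (all names here are illustrative)."""
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

    def generalize(self):
        """Move up to the abstract value covering this one."""
        return self.parent

    def specialize(self):
        """Enumerate the more specific values under this abstraction."""
        return [c.value for c in self.children]

# Toy hierarchy: a query on 'sedan' generalizes to 'car', whose
# specializations become cooperative neighborhood answers.
sedan, coupe, wagon = ValueNode("sedan"), ValueNode("coupe"), ValueNode("wagon")
car = ValueNode("car", [sedan, coupe, wagon])
print(sedan.generalize().specialize())  # ['sedan', 'coupe', 'wagon']
```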


A Case Study on the Application of Systems Engineering to the Development of PHWR Core Management Support System (시스템엔지니어링 기법을 적용한 가압중수로 노심관리 지원시스템 개발 사례)

  • Yeom, Choong Sub;Kim, Jin Il;Song, Young Man
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.9 no.1
    • /
    • pp.33-45
    • /
    • 2013
  • A systems engineering approach was applied to the development of an operator-support core management system for enhancing operability and safety in PHWR (Pressurized Heavy Water Reactor) operation, based on on-site operation experience and the documented core management procedures. The description and definition of the system were produced by investigating and analyzing the core management procedures. Fuel management, detector calibration, safety management, core power distribution monitoring, and integrated data management were defined as the main user requirements. From these requirements, 11 upper-level functional requirements were derived by considering on-site operation experience and the core management procedure documents. The detailed system requirements produced by analyzing the upper-level functional requirements were confirmed by interviewing the staff responsible for the core management procedures and were written into an SRS (Software Requirement Specification) document using the IEEE 830 template. The system was designed on the basis of the SRS and nuclear engineering analysis, and then tested by simulation using on-site data as an example. A core power monitoring model related to core management was suggested, along with a standard process for core management. The extraction, analysis, and documentation of the requirements are presented as a systems engineering case study.