• Title/Summary/Keyword: Computational system


A Coupled-ART Neural Network Capable of Modularized Categorization of Patterns (복합 특징의 분리 처리를 위한 모듈화된 Coupled-ART 신경회로망)

  • 우용태;이남일;안광선
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.10
    • /
    • pp.2028-2042
    • /
    • 1994
  • Properly defining signal and noise in a self-organizing system such as the ART (Adaptive Resonance Theory) neural network model raises a number of subtle issues. Pattern context must enter the definition, so that input features treated as irrelevant noise when embedded in one input pattern may be treated as informative signals when embedded in a different input pattern. ART automatically self-scales its computational units to embody context- and learning-dependent definitions of signal and noise, and there is no problem in categorizing input patterns whose features are similar in nature. However, when input patterns have features that differ in size and nature, a single vigilance parameter is not enough to differentiate signal from noise for good categorization. For example, if the value of the vigilance parameter is large, noise may be processed as an informative signal and unnecessary categories are generated; if the value is small, an informative signal may be ignored and treated as noise. Hence it is not easy to achieve good pattern categorization. To overcome these problems, a Coupled-ART neural network capable of modularized categorization of patterns is proposed. The Coupled-ART has two layers of tightly coupled modules, the upper and the lower. The lower layer processes the global features and the structural features of a pattern separately, in parallel. The upper layer combines the categorized outputs from the lower layer and categorizes the combined output. Hence, owing to the modularized categorization of patterns, the Coupled-ART classifies patterns more efficiently than the ART1 model.

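To make the role of the vigilance parameter discussed above concrete, the sketch below shows the standard ART1 vigilance (match) test for binary patterns. The pattern, prototype, and vigilance values are illustrative assumptions; this is not the Coupled-ART implementation from the paper.

```python
import numpy as np

def art1_vigilance_match(input_pattern, category_weights, rho):
    """ART1 vigilance test for binary patterns:
    accept the category if |I AND w| / |I| >= rho."""
    overlap = np.logical_and(input_pattern, category_weights).sum()
    return overlap / input_pattern.sum() >= rho

# Toy example: the same input/prototype pair passes a loose vigilance but fails
# a strict one, illustrating why a single rho struggles when features differ
# in size and nature.
I = np.array([1, 1, 1, 0, 0, 1])   # binary input pattern
w = np.array([1, 1, 0, 0, 0, 1])   # stored category prototype
print(art1_vigilance_match(I, w, rho=0.70))  # True  -> resonance, category accepted
print(art1_vigilance_match(I, w, rho=0.80))  # False -> reset, search for another category
```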

Wintertime Extreme Storm Waves in the East Sea: Estimation of Extreme Storm Waves and Wave-Structure Interaction Study in the Fushiki Port, Toyama Bay (동해의 동계 극한 폭풍파랑: 토야마만 후시키항의 극한 폭풍파랑 추산 및 파랑 · 구조물 상호작용 연구)

  • Lee, Han Soo;Komaguchi, Tomoaki;Yamamoto, Atsushi;Hara, Masanori
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.25 no.5
    • /
    • pp.335-347
    • /
    • 2013
  • In February 2008, high storm waves due to a developed atmospheric low-pressure system propagating from the west off Hokkaido, Japan, to the south and southwest throughout the East Sea (ES) caused extensive damage along the central coast of Japan and the east coast of Korea. This study consists of two parts. In the first part, we estimate extreme storm wave characteristics in Toyama Bay, where heavy coastal damage occurred, using a non-hydrostatic meteorological model and a spectral wave model, considering extreme conditions for two factors of wind-wave growth: wind intensity and duration. The estimated extreme significant wave height and corresponding wave period at Fushiki, Toyama, were 6.78 m and 18.28 sec, respectively. In the second part, we perform numerical experiments on wave-structure interaction in the Fushiki Port, Toyama Bay, where the long North-Breakwater was heavily damaged by the storm waves of February 2008. The experiments are conducted using a non-linear shallow-water equation model with adaptive mesh refinement (AMR) and a wet-dry scheme. The estimated extreme storm waves of 6.78 m and 18.28 sec are used for the incident wave profile. The results show that the Fushiki Port would be overtopped and flooded by extreme storm waves if the North-Breakwater did not function properly after being damaged. The storm waves would also overtop the seawalls and sidewalls of the Manyou Pier behind the North-Breakwater. The results further show that the AMR-refined meshes with the wet-dry scheme capture the coastline and coastal structures well while keeping the computational load low.
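
As a small worked example related to the wave period estimated above, the sketch below solves the linear dispersion relation ω² = g·k·tanh(kh) for the wavelength at a few water depths. The depths are illustrative assumptions, not values from the study, and the study's wave modeling is far more involved.

```python
import math

def wavelength(T, h, g=9.81, tol=1e-8):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) for the
    wavelength L = 2*pi/k by fixed-point iteration."""
    L0 = g * T**2 / (2.0 * math.pi)          # deep-water wavelength as first guess
    L = L0
    for _ in range(200):
        L_new = L0 * math.tanh(2.0 * math.pi * h / L)
        if abs(L_new - L) < tol:
            break
        L = L_new
    return L

T = 18.28                                     # wave period from the abstract [s]
for h in (20.0, 50.0, 200.0):                 # assumed water depths [m]
    print(f"h = {h:6.1f} m  ->  L ≈ {wavelength(T, h):7.1f} m")
```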

Study on the Heat Transfer Phenomenon around Underground Concrete Digesters for Biogas Production Systems (생물개스 발생시스템을 위한 지하매설콘크리트 다이제스터의 열전달에 관한 연구)

  • 김윤기;고재균
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.22 no.1
    • /
    • pp.53-66
    • /
    • 1980
  • This research is concerned with analytical and experimental studies on the heat transfer phenomenon around underground concrete digesters used in biogas production systems. A mathematical and computational method was developed to estimate heat losses from an underground cylindrical concrete digester used for biogas production. To test its feasibility and to evaluate the thermal parameters of the materials involved, the method was applied to six physical model digesters. The cylindrical concrete digester was taken as a physical model to which the mathematical heat-balance model can be applied. The mathematical model was transformed by means of the finite element method and used to analyze temperature distributions with respect to several boundary conditions and design parameters. The design parameters of the experimental digesters were: three sizes (40 cm by 80 cm, 80 cm by 160 cm, and 100 cm by 200 cm in diameter and height); two levels of insulation material (plain concrete and vermiculite-mixed concrete); and two installation types (underground and half-exposed). To serve the particular aim of this study, the liquid within the digester was replaced by water and its temperature was controlled at five levels (35°C, 30°C, 25°C, 20°C, and 15°C), while the ambient air temperature and ground temperature were monitored outside the system under natural winter climate conditions. The following results were drawn from the study. 1. The analytical method, which produced estimates of the temperature distribution around a cylindrical digester, was judged generally acceptable from a comparison of the estimated values with the measured ones. However, the difference between estimated and measured temperatures tended to increase considerably when the ambient temperature was relatively low. This was mainly related to variations in the input parameters used in the numerical analysis, including the thermal conductivity of the soil. Consequently, improving these input data is expected to yield more refined estimates. 2. The difference between estimated and measured heat losses showed a trend similar to that of the temperature distribution discussed above. 3. A map of isothermal lines drawn from the estimated temperature distribution was found to be very useful for observing the direction and rate of heat transfer within the boundary. From this analysis, it was interpreted that most of the heat loss passes through the triangular section bounded within 45 degrees toward the wall at the bottom edge of the digester; therefore, any effective insulation should be concentrated within this region. 4. It was verified by experiment that the heat loss per unit volume of liquid decreased as the size of the digester increased. For instance, at a liquid temperature of 35°C, the heat loss per unit volume from the 0.1 m³ digester was 1,050 kcal/hr·m³, while that from the 1.57 m³ digester was 150 kcal/hr·m³. 5. In terms of insulation, the vermiculite concrete was consistently superior to the plain concrete. At liquid temperatures ranging from 15°C to 35°C, the reduction in heat loss ranged from 5% to 25% for the half-exposed digester and from 10% to 28% for the fully underground digester. 6. Comparing the half-exposed and underground digesters, the heat loss from the former was 1.6 to 2.6 times that from the latter. This supports the conclusion that the underground digester conserves heat better during winter.

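As a much simplified companion to the heat-loss estimates above, the sketch below computes steady radial conduction through a cylindrical digester wall. The dimensions and thermal conductivities are generic handbook-style assumptions; the paper itself solves the full heat-balance model with the finite element method and soil boundary conditions.

```python
import math

def cylindrical_wall_heat_loss(t_in, t_out, r_in, r_out, height, k):
    """Steady radial conduction through a cylindrical wall [W]:
    Q = 2*pi*k*H*(T_in - T_out) / ln(r_out / r_in)"""
    return 2.0 * math.pi * k * height * (t_in - t_out) / math.log(r_out / r_in)

# Illustrative case: a 1.0 m x 2.0 m digester with a 10 cm wall, liquid at 35 °C
# and surroundings at 5 °C.  The k values are typical handbook figures, not the
# thermal parameters evaluated in the paper.
for name, k in (("plain concrete", 1.4), ("vermiculite concrete", 0.25)):
    q_watt = cylindrical_wall_heat_loss(t_in=35.0, t_out=5.0,
                                        r_in=0.5, r_out=0.6, height=2.0, k=k)
    print(f"{name:22s}: {q_watt:7.1f} W  ({q_watt * 0.8598:7.1f} kcal/hr)")
```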

An Estimation of Price Elasticities of Import Demand and Export Supply Functions Derived from an Integrated Production Model (생산모형(生産模型)을 이용(利用)한 수출(輸出)·수입함수(輸入函數)의 가격탄성치(價格彈性値) 추정(推定))

  • Lee, Hong-gue
    • KDI Journal of Economic Policy
    • /
    • v.12 no.4
    • /
    • pp.47-69
    • /
    • 1990
  • Using an aggregator model, we look into the possibilities for substitution between Korea's exports, imports, domestic sales and domestic inputs (particularly labor), and substitution between disaggregated export and import components. Our approach heavily draws on an economy-wide GNP function that is similar to Samuelson's, modeling trade functions as derived from an integrated production system. Under the condition of homotheticity and weak separability, the GNP function would facilitate consistent aggregation that retains certain properties of the production structure. It would also be useful for a two-stage optimization process that enables us to obtain not only the net output price elasticities of the first-level aggregator functions, but also those of the second-level individual components of exports and imports. For the implementation of the model, we apply the Symmetric Generalized McFadden (SGM) function developed by Diewert and Wales to both stages of estimation. The first stage of the estimation procedure is to estimate the unit quantity equations of the second-level exports and imports that comprise four components each. The parameter estimates obtained in the first stage are utilized in the derivation of instrumental variables for the aggregate export and import prices being employed in the upper model. In the second stage, the net output supply equations derived from the GNP function are used in the estimation of the price elasticities of the first-level variables: exports, imports, domestic sales and labor. With these estimates in hand, we can come up with various elasticities of both the net output supply functions and the individual components of exports and imports. At the aggregate level (first-level), exports appear to be substitutable with domestic sales, while labor is complementary with imports. An increase in the price of exports would reduce the amount of the domestic sales supply, and a decrease in the wage rate would boost the demand for imports. On the other hand, labor and imports are complementary with exports and domestic sales in the input-output structure. At the disaggregate level (second-level), the price elasticities of the export and import components obtained indicate that both substitution and complement possibilities exist between them. Although these elasticities are interesting in their own right, they would be more usefully applied as inputs to the computational general equilibrium model.

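As a toy illustration of the elasticity concept discussed above (not the paper's two-stage SGM system), the sketch below recovers a constant price elasticity from synthetic data with a log-log regression. All quantities are synthetic and exist only for the example.

```python
import numpy as np

# Generate synthetic demand data with a known constant price elasticity, then
# estimate it as the slope of log(quantity) on log(price).
rng = np.random.default_rng(0)
true_elasticity = -1.2
price = rng.uniform(0.5, 2.0, size=200)
quantity = np.exp(3.0 + true_elasticity * np.log(price) + rng.normal(0.0, 0.05, 200))

X = np.column_stack([np.ones_like(price), np.log(price)])
beta, *_ = np.linalg.lstsq(X, np.log(quantity), rcond=None)
print(f"estimated price elasticity ≈ {beta[1]:.3f}")   # close to -1.2
```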

Motion Analysis of Light Buoys Combined with 7 Nautical Mile Self-Contained Lantern (7마일 등명기를 결합한 경량화 등부표의 운동 해석)

  • Son, Bo-Hun;Ko, Seok-Won;Yang, Jae-Hyoung;Jeong, Se-Min
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.24 no.5
    • /
    • pp.628-636
    • /
    • 2018
  • Because large buoys are mainly made of steel, they are heavy and vulnerable to corrosion by sea water, which makes buoy installation and maintenance difficult. Moreover, vessel collisions with buoys, and damage to vessels due to the buoy material (e.g., steel), are reported every year. Recently, light buoys adopting eco-friendly, lightweight materials have come into the spotlight as a way to solve these problems. In Korea, a new lightweight buoy with a 7-nautical-mile lantern, using expanded polypropylene (EPP) for the buoyant body and aluminum for the tower structure, was developed in 2017. When these light buoys operate in the ocean, the visibility and angle of the light from the lantern installed on them change, which may cause them to function improperly. Therefore, research on the performance of light buoys is needed, since the weight distribution and motion characteristics of the new buoys differ from those of conventional models. In this study, stability estimation and motion analyses of the newly developed buoy under various environmental conditions, including a mooring line, were carried out using ANSYS AQWA. Numerical simulations for the estimation of wind and current loads were performed with the commercial CFD software Siemens STAR-CCM+ to increase the accuracy of the motion analysis. By comparing the estimated maximum significant motions of the light buoys, it was found that waves and currents had the greater influence on buoy motion. The estimated motions of the buoys also became larger as the sea state worsened, likely because the peak frequencies of the wave spectra approached the natural frequencies of the buoys.
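
As a rough companion to the resonance remark above, the sketch below estimates the undamped heave natural period of a floating buoy from the standard hydrostatic-restoring formula. The mass, added mass, and waterplane area are illustrative assumptions, not the specifications of the developed buoy.

```python
import math

def heave_natural_period(mass, added_mass, waterplane_area, rho=1025.0, g=9.81):
    """Undamped heave natural period of a floating body [s]:
    T = 2*pi*sqrt((m + a33) / (rho * g * Awp))"""
    return 2.0 * math.pi * math.sqrt((mass + added_mass) / (rho * g * waterplane_area))

m_buoy = 800.0   # buoy mass [kg]            (assumed)
a33    = 200.0   # heave added mass [kg]      (assumed)
awp    = 1.8     # waterplane area [m^2]      (assumed)
Tn = heave_natural_period(m_buoy, a33, awp)
print(f"heave natural period ≈ {Tn:.2f} s")
# If a sea state's spectral peak period approaches Tn, larger heave responses
# are expected, consistent with the trend reported in the abstract.
```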

Numerical and Experimental Study on the Coal Reaction in an Entrained Flow Gasifier (습식분류층 석탄가스화기 수치해석 및 실험적 연구)

  • Kim, Hey-Suk;Choi, Seung-Hee;Hwang, Min-Jung;Song, Woo-Young;Shin, Mi-Soo;Jang, Dong-Soon;Yun, Sang-June;Choi, Young-Chan;Lee, Gae-Goo
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.2
    • /
    • pp.165-174
    • /
    • 2010
  • The numerical modeling of the coal gasification reactions occurring in an entrained-flow coal gasifier is presented in this study. The purpose is to develop a reliable CFD (Computational Fluid Dynamics) evaluation method for coal gasifiers, not only for basic design but also for further optimization of system operation. The coal gasification reaction consists of a series of processes such as water evaporation, coal devolatilization, heterogeneous char reactions, and gas-phase reactions of the coal off-gas, occurring in a two-phase, turbulent, radiatively participating medium. Both numerical and experimental studies are made for the 1.0 ton/day entrained-flow coal gasifier installed at the Korea Institute of Energy Research (KIER). The comprehensive computer program in this study is built on a commercial CFD program by implementing several subroutines necessary for the gasification process, including an Eddy-Breakup model combined with a harmonic-mean approach for turbulent reaction. Furthermore, a Lagrangian approach for particle trajectories is adopted, considering turbulence effects caused by the non-linearity of the drag force. The developed program is successfully evaluated against experimental data such as temperature and gaseous species concentration profiles, together with the cold gas efficiency. Further intensive investigation is made of the size distribution of the pulverized coal particles, the slurry concentration, and the design parameters of the gasifier. The parameters considered are compared and evaluated against each other through the calculated syngas production rate and cold gas efficiency, which directly affect gasification performance. Considering the complexity of entrained-flow coal gasification, even though the results of this parametric study look physically reasonable and consistent, further modeling effort together with systematic evaluation against experimental data is necessary to develop a reliable CFD-based design tool.
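
Since cold gas efficiency is used above as an evaluation metric, here is a minimal sketch of its definition: the chemical energy of the product syngas divided by the chemical energy of the coal feed. The flow rates and heating values below are generic assumptions, not KIER measurements.

```python
def cold_gas_efficiency(syngas_flow_nm3_h, syngas_lhv_kcal_nm3,
                        coal_feed_kg_h, coal_lhv_kcal_kg):
    """Cold gas efficiency = chemical energy in syngas / chemical energy in coal."""
    return (syngas_flow_nm3_h * syngas_lhv_kcal_nm3) / (coal_feed_kg_h * coal_lhv_kcal_kg)

# Illustrative inputs only: a nominal 1.0 ton/day coal feed is about 41.7 kg/h;
# the syngas flow and heating values are generic placeholder figures.
cge = cold_gas_efficiency(syngas_flow_nm3_h=75.0, syngas_lhv_kcal_nm3=2200.0,
                          coal_feed_kg_h=41.7, coal_lhv_kcal_kg=6500.0)
print(f"cold gas efficiency ≈ {cge:.1%}")
```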

Game Theoretic Optimization of Investment Portfolio Considering the Performance of Information Security Countermeasure (정보보호 대책의 성능을 고려한 투자 포트폴리오의 게임 이론적 최적화)

  • Lee, Sang-Hoon;Kim, Tae-Sung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.37-50
    • /
    • 2020
  • Information security has become an important issue worldwide. Various information and communication technologies, such as the Internet of Things, big data, cloud, and artificial intelligence, are developing, and the need for information security is increasing. Although the necessity of information security is expanding with the development of information and communication technology, interest in information security investment remains insufficient. In general, measuring the effect of information security investment is difficult, so appropriate investment is not being practiced, and organizations are reducing their information security investment. In addition, since the types and specifications of information security measures are diverse, it is difficult to compare and evaluate countermeasures objectively, and there is a lack of decision-making methods for information security investment. To develop the organization, policies and decisions related to information security are essential, and measuring the effect of information security investment is necessary. Therefore, this study proposes a method of constructing an investment portfolio for information security measures using game theory and derives an optimal defence probability. Using a two-person game model, the information security manager and the attacker are assumed to be the players, and the information security countermeasures and information security threats are assumed to be their respective strategies. A zero-sum game, in which the sum of the players' payoffs is zero, is assumed, and we derive the solution of a mixed-strategy game in which a strategy is selected according to a probability distribution over strategies. In the real world, various types of information security threats exist, so multiple information security measures should be considered to maintain an appropriate information security level. We assume that the defence ratios of the information security countermeasures are known, and we derive the optimal solution of the mixed-strategy game using linear programming. The contributions of this study are as follows. First, we conduct the analysis using real performance data of information security measures. Information security managers can use the methodology suggested in this study to make practical decisions when establishing an investment portfolio of information security countermeasures. Second, the investment weight of each information security countermeasure is derived. Since we derive the weight of each measure, not just whether or not it has been invested in, it is easy to construct an investment portfolio when decisions must consider a number of countermeasures. Finally, the optimal defence probability can be found after constructing the investment portfolio. Information security managers can measure the specific investment effect by selecting countermeasures that fit the organization's information security budget. Numerical examples are also presented and the computational results are analyzed.
Based on the performance of various information security countermeasures (Firewall, IPS, and Antivirus), data related to the measures are collected to construct a portfolio of countermeasures. The defence ratios of the countermeasures are generated using a uniform distribution, and the performance coverage is derived from the report of each countermeasure. In a numerical example that considers Firewall, IPS, and Antivirus as countermeasures, the investment weights of Firewall, IPS, and Antivirus are optimized to 60.74%, 39.26%, and 0%, respectively. The result shows that the defence probability of the organization is maximized at 83.87%. When the methodology and examples of this study are used in practice, information security managers can consider various types of information security measures, and the appropriate investment level of each measure can be reflected in the organization's budget.
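
A minimal sketch of the maximin mixed-strategy solution by linear programming, as described above, is given below using scipy. The defence-ratio matrix is hypothetical and is not intended to reproduce the 60.74%/39.26%/0% example in the abstract.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical defence-ratio matrix D[i, j]: probability that countermeasure i
# (rows: Firewall, IPS, Antivirus) blocks threat type j (columns).
D = np.array([[0.90, 0.60, 0.30],
              [0.50, 0.85, 0.70],
              [0.20, 0.40, 0.95]])
n_cm, n_threat = D.shape

# Defender chooses weights x over countermeasures to maximize the guaranteed
# defence probability v:  maximize v  s.t.  D^T x >= v for every threat,
# sum(x) = 1, x >= 0.  Variables z = [x_1, ..., x_n, v]; linprog minimizes -v.
c = np.concatenate([np.zeros(n_cm), [-1.0]])
A_ub = np.hstack([-D.T, np.ones((n_threat, 1))])   # -D^T x + v <= 0
b_ub = np.zeros(n_threat)
A_eq = np.concatenate([np.ones(n_cm), [0.0]]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0.0, 1.0)] * n_cm + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
weights, value = res.x[:n_cm], res.x[-1]
print("investment weights:", np.round(weights, 4))
print("guaranteed defence probability:", round(float(value), 4))
```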

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods used to handle big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require many computations, which can lead to high computational cost and overfitting in the model. Thus, dimension reduction is necessary to improve the performance of the model. Diverse methods have been proposed, from merely reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector-space representations of words and can capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once the feature selection algorithm identifies words that are not important, we assume that words similar to them also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification: selective word elimination under specific rules, and construction of word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words that have comparatively low information gain values from the raw text and form word embeddings. Second, we additionally select words that are similar to the words with low information gain values and form word embeddings. Finally, the filtered text and word embeddings are applied to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets, and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes was over 70% were classified as helpful reviews. Since Yelp only shows the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared them with Word2Vec and GloVe word embeddings that use all the words, and showed that one of the proposed methods outperforms the embeddings built on all the words. By removing unimportant words, we can obtain better performance; however, removing too many words lowered the performance.
For future research, diverse preprocessing methods and in-depth analysis of word co-occurrence should be considered for measuring similarity among words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, and the possible combinations of word embedding methods and elimination methods can be examined.
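
A minimal sketch of the elimination idea described above, combining information gain with Word2Vec similarity on a toy corpus, is shown below. The corpus, thresholds, and parameters are illustrative only; the paper's pipeline additionally feeds the filtered text into CNN and attention-based BiLSTM classifiers.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

# Toy labeled corpus standing in for the review datasets.
docs = ["great book loved the plot", "terrible book waste of money",
        "loved it great read", "boring plot waste of time"]
labels = [1, 0, 1, 0]

# Step 1: measure word importance with information gain (mutual information).
vec = CountVectorizer()
X = vec.fit_transform(docs)
words = vec.get_feature_names_out()
ig = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
low_ig = {w for w, g in zip(words, ig) if g < np.median(ig)}

# Step 2: extend the removal set with words whose embeddings are similar to
# the low-information-gain words (trained here on the toy corpus itself).
w2v = Word2Vec([d.split() for d in docs], vector_size=32, window=3,
               min_count=1, seed=0)
similar = {s for w in low_ig if w in w2v.wv
           for s, sim in w2v.wv.most_similar(w, topn=2) if sim > 0.3}
to_remove = low_ig | similar

# Step 3: filter the text before building embeddings for the classifier.
filtered_docs = [" ".join(t for t in d.split() if t not in to_remove) for d in docs]
print(filtered_docs)
```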

CFD Simulation of Changes in NOX Distribution according to an Urban Renewal Project (CFD 모델을 이용한 도시 재정비 사업에 의한 NOX 분포 변화 모의)

  • Kim, Ji-Hyun;Kim, Yeon-Uk;Do, Heon-Seok;Kwak, Kyung-Hwan
    • Journal of Environmental Impact Assessment
    • /
    • v.30 no.3
    • /
    • pp.141-154
    • /
    • 2021
  • In this study, the effect of the restoration of Yaksa stream and the construction of an apartment complex, carried out under the urban renewal project in the Yaksa district of Chuncheon, on air quality in the surrounding area was evaluated using computational fluid dynamics (CFD) model simulations. In order to compare the impacts of the project, wind and pollutant concentration fields were simulated using topographic data from 2011 and 2017, which represent the periods before and after the urban renewal project, respectively. In the numerical experiments, scenarios were set up to analyze the effect of the construction of the apartment complex and the effect of stream restoration. Wind direction and wind speed data obtained from the Chuncheon Automated Synoptic Observing System (ASOS) were used as the inflow boundary conditions, and the simulation results were weighted according to the frequencies of the eight inflow wind directions. The changes in wind speed and NOX concentration distribution due to the changes in buildings and terrain were compared between scenarios. As a result, the concentration of NOX emitted from the surrounding roads was increased by the construction of the apartment complex, and the magnitude of the increase was reduced when the effect of stream restoration was included. The concentration of NOX decreased around the restored stream, while it increased significantly around the constructed apartment complex. The increase in NOX concentration around the apartment complex was more pronounced at locations downwind of the complex, and the effect persisted up to the height of the buildings. In conclusion, it was confirmed that the arrangement of the apartment complex and the restored stream relative to the main wind direction of the target area is one of the major factors determining the surrounding air quality.
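
The wind-direction frequency weighting described above can be sketched as follows; the field shapes and frequency values are placeholders, not the Chuncheon ASOS data or actual CFD output.

```python
import numpy as np

# One simulated NOx concentration field (ny x nx) per inflow wind direction,
# combined with the observed frequency of each direction.
directions = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
freq = np.array([0.10, 0.05, 0.08, 0.07, 0.15, 0.25, 0.20, 0.10])  # sums to 1

rng = np.random.default_rng(0)
fields = {d: rng.random((50, 50)) for d in directions}   # placeholder CFD results

weighted_mean = sum(f * fields[d] for d, f in zip(directions, freq))
print("frequency-weighted NOx field shape:", weighted_mean.shape)
print("domain-averaged weighted concentration:", round(float(weighted_mean.mean()), 3))
```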

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such network, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. However, a few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of the Convolutional Neural Network rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and labor-intensive to gather a large-scale dataset for training a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning scenarios: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation that carries more information about the image; the concatenated representation has 9,192 (4,096 + 4,096 + 1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy, so in the third step we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved.
To evaluate the proposed method, experiments are conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple ConvNet layer representation. Moreover, the proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% for the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% for the FC7 layer on the SUN397 dataset. We also showed that the proposed approach achieves superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
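
A hedged sketch of the feature-concatenation-plus-PCA pipeline is given below using a pre-trained AlexNet from torchvision. The images and labels are random placeholders, and the PCA dimensionality and linear classifier are illustrative choices rather than the paper's exact configuration.

```python
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# Load a pre-trained AlexNet and put it in evaluation mode.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def fc_features(images):
    """Return concatenated FC6, FC7 and FC8 activations for a batch of images."""
    with torch.no_grad():
        x = torch.flatten(alexnet.avgpool(alexnet.features(images)), 1)
        feats = []
        for i, layer in enumerate(alexnet.classifier):
            x = layer(x)
            if i in (2, 5, 6):        # after the ReLU of FC6/FC7 and the FC8 output
                feats.append(x)
        return torch.cat(feats, dim=1)  # shape: (batch, 4096 + 4096 + 1000 = 9192)

# Random placeholders standing in for Caltech-256 / VOC07 / SUN397 images.
images = torch.randn(40, 3, 224, 224)
labels = torch.randint(0, 4, (40,)).numpy()

X = fc_features(images).numpy()
X_reduced = PCA(n_components=20).fit_transform(X)   # keep the salient components
clf = LinearSVC().fit(X_reduced, labels)
print("training accuracy on placeholder data:", clf.score(X_reduced, labels))
```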