• Title/Summary/Keyword: 한국 수학 (Korean Mathematics)

Search results: 10,180

Implementation of Evolving Neural Network Controller for Inverted Pendulum System (도립진자 시스템을 위한 진화형 신경회로망 제어기의 실현)

  • 심영진;김태우;최우진;이준탁
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.14 no.3 / pp.68-76 / 2000
  • The stabilization control of an Inverted Pendulum (IP) system is difficult because of its nonlinearity and structural instability. Furthermore, conventional techniques such as pole placement and optimal control based on local linearizations have narrow stabilizable regions, and the fine tuning of their gain parameters is troublesome. Thus, in this paper, an Evolving Neural Network Controller (ENNC), whose structure and connection weights are optimized simultaneously by a Real Variable Elitist Genetic Algorithm (RVEGA), is presented for the stabilization of an IP system with nonlinearity. The proposed ENNC is described by a simple genetic chromosome, and operations such as the deletion of neurons are applied according to the various flag types. Therefore, the connection weights, the structure, and the neuron types of the given ENNC can be optimized by the proposed evolution strategy. The proposed ENNC was implemented successfully on an ADA-2310 data acquisition board and an 80586 microprocessor in order to stabilize the IP system. Through simulation and experimental results, we show that the finally acquired optimal ENNC is very useful in the stabilization control of the IP system.
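
To make the evolutionary approach concrete, the sketch below evolves the weights of a small neural controller for a simplified inverted pendulum with a real-coded elitist GA. This is a minimal illustration under stated assumptions: the paper's RVEGA also evolves the network structure and neuron types via flag genes, and the dynamics constants and network sizes here are invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(weights, steps=400, dt=0.02):
    """Simulate a linearized cart-pole; return negative cost as fitness."""
    w1 = weights[:8].reshape(2, 4)        # input layer: 4 states -> 2 neurons
    w2 = weights[8:10]                    # output layer: 2 neurons -> 1 force
    x = np.array([0.0, 0.0, 0.1, 0.0])    # cart pos/vel, pole angle/rate
    cost = 0.0
    for _ in range(steps):
        u = float(np.tanh(w1 @ x) @ w2)   # control force from the network
        # illustrative linearized dynamics, not the paper's plant model
        x = x + dt * np.array([x[1], u + 0.1 * x[2], x[3], 9.8 * x[2] - u])
        x = np.clip(x, -1e3, 1e3)         # guard against divergence
        cost += x[2]**2 + 0.1 * x[0]**2
    return -cost

pop = rng.normal(0.0, 1.0, size=(30, 10))
for gen in range(100):
    fit = np.array([simulate(ind) for ind in pop])
    elite = pop[fit.argmax()].copy()              # elitism: keep the best
    p = np.exp(fit - fit.max())
    idx = rng.choice(30, size=(30, 2), p=p / p.sum())
    pop = pop[idx].mean(axis=1)                   # arithmetic crossover
    pop += rng.normal(0.0, 0.1, pop.shape)        # Gaussian mutation
    pop[0] = elite                                # reinsert the elite
print("best fitness:", max(simulate(ind) for ind in pop))
```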

A Study on Ontology of Digital Photo Image Focused on a Simulacre Concept of Deleuze & Baudrillard (디지털 사진 이미지의 존재론에 관한 연구 -들뢰즈와 보드리야르의 시뮬라크르 개념을 중심으로)

  • Gwon, Oh-sang
    • Cartoon and Animation Studies / s.51 / pp.391-411 / 2018
  • The purpose of this thesis is to examine the ontology of the digital photo image based on the Simulacre concept of Gilles Deleuze and Jean Baudrillard. Traditionally, the analog image follows the logic of reproduction, bearing a similarity to its original object. The visual reality of the analog image may be illuminated, interpreted, and described from a subjective viewpoint, but it does not deviate from the interpreted reality. The digital image, however, does not exist physically; it exists as information made of mathematical data, a digital algorithm. In the digital image, the newness of every reproduction, that is, the essence of a subject 'once existing there', no longer exists, and the image does not indicate or reproduce an outside object. The digital image therefore lacks similarity and no longer keeps its indexical capacity. It is converted into a virtual domain: not the reproduction of what already exists but the display of what does not yet exist. This not-being of the digital image changes our understanding of reality, existence, and imagination. Dividing the image into reality and imagination is now meaningless in itself; the digital image is not merely a technical improvement but a new image fundamentally different from the existing image. Eventually, the digital image of today passes beyond the stage of visualizing an existent object: nonexistent things are visualized, and reality operates virtually. The digital image does not reproduce our reality but realistically produces another reality. In other words, it is a virtual reproduction producing an image unrelated to any object, that is to say, a Simulacre. In the virtually simulated world, reality has infinite possibility; it is not a picture of the past and present but holds the possibility of the infinite virtual, which is not fixed, is infinitely mutable, and is not yet actualized.

An Intercomparison of Model Predictions for an Urban Contamination Resulting from the Explosion of a Radiological Dispersal Device (도심에서 방사능분산장치의 폭발로 인한 피폭선량 예측결과의 상호비교)

  • Hwang, Won-Tae;Jeong, Hyo-Jun;Kim, Eun-Han;Han, Moon-Hee
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT) / v.7 no.1 / pp.39-47 / 2009
  • METRO-K is a model for radiological dose assessment due to radioactive contamination in the Korean urban environment. The model took part in the Urban Remediation Working Group within the IAEA's (International Atomic Energy Agency) EMRAS (Environmental Modeling for RAdiation Safety) program. The Working Group designed a scenario for the intercomparison of model predictions for the radioactive contamination resulting from the explosion of a radiological dispersal device in a hypothetical city. This paper deals intensively with one part of the predictive results produced within the EMRAS program. The predictive results of three different models (METRO-K, RESRAD-RDD, CPHR) were submitted to the Working Group. The gaps between the predictive results were due to differences in the mathematical modeling approaches, the parameter values, and the understanding of the assessors. Even when the final results (for example, dose rates from contaminated surfaces which might affect a receptor) were similar, the understanding of the contribution of each contaminated surface showed great differences. In the authors' judgment, this is due to the lack of understanding of and information on radioactive terror events, as well as the social and cultural differences among the assessors. Therefore, the experience of the assessors and their subjective judgments can be important factors in obtaining reliable results. It was also identified that, if a little additional information can be acquired, METRO-K could become a useful decision-support tool against contamination resulting from radioactive terror by improving the existing model.
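
As a hedged illustration of the kind of gap described above (all numbers hypothetical, not the paper's results), the snippet below shows how two models can report nearly identical total dose rates at a receptor while attributing them to contaminated surfaces very differently:

```python
# Hypothetical per-surface dose-rate contributions (arbitrary units);
# the model names and values are illustrative placeholders only.
surface_dose = {
    "MODEL-A": {"roof": 0.50, "wall": 0.30, "ground": 0.20},
    "MODEL-B": {"roof": 0.15, "wall": 0.25, "ground": 0.60},
}
for model, contrib in surface_dose.items():
    total = sum(contrib.values())           # totals agree (1.00 vs 1.00)...
    shares = {s: f"{v / total:.0%}" for s, v in contrib.items()}
    print(model, "total:", round(total, 2), "shares:", shares)
    # ...but the surface-by-surface attribution differs sharply.
```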

Error factors and uncertainty measurement for determinations of amino acid in beef bone extract (사골농축액 시료 중에 함유된 아미노산 정량분석에 대한 오차 요인 및 측정불확도 추정)

  • Kim, Young-Jun;Kim, Ji-Young;Jung, Min-Yu;Shin, Young-Jae
    • Analytical Science and Technology / v.26 no.2 / pp.125-134 / 2013
  • This study estimated the measurement uncertainty of 23 amino acids from beef bone extract determined by high performance liquid chromatography (HPLC). The sources of measurement uncertainty (i.e. sample weight, final volume, standard weight, purity, standard solution, calibration curve, recovery and repeatability) associated with the analysis of amino acids were evaluated. The uncertainty was estimated on the basis of the GUM (Guide to the Expression of Uncertainty in Measurement) and the EURACHEM document, using mathematical calculation and statistical analysis. The content of total amino acids from beef bone extract was 36.18 g/100 g, and the expanded uncertainty, obtained by multiplying by the coverage factor (k, 2.05~2.36), was 3.81 g/100 g at a 95% confidence level. The major contributors to the measurement uncertainty were identified, in order, as recovery and repeatability (25.2%), sample pretreatment (24.5%), calibration curve (24.0%) and weight of the reference material (10.4%). Therefore, more careful experiments, together with improved personal proficiency, are required in these steps to reduce the uncertainty of amino acid analysis.
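
The combination step follows the standard GUM recipe: relative standard uncertainties from each source are combined in quadrature into a combined uncertainty, which is then multiplied by the coverage factor k. A minimal sketch, with illustrative component values rather than the paper's actual budget:

```python
import math

mean_content = 36.18                      # total amino acids, g/100 g
components = {                            # relative standard uncertainties
    "recovery_repeatability": 0.026,      # illustrative values, not the
    "sample_pretreatment": 0.025,         # paper's uncertainty budget
    "calibration_curve": 0.025,
    "reference_material_weight": 0.017,
}
u_rel = math.sqrt(sum(u**2 for u in components.values()))
k = 2.05                                  # coverage factor, ~95 % level
U = k * u_rel * mean_content              # expanded uncertainty, g/100 g
print(f"combined relative u = {u_rel:.4f}, expanded U = {U:.2f} g/100 g")
```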

The Study on the Elaboration of Technology Valuation Model and the Adequacy of Volatility based on Real Options (실물옵션 기반 기술가치 평가모델 정교화와 변동성 유효구간에 관한 연구)

  • Sung, Tae-Eung;Lee, Jongtaik;Kim, Byunghoon;Jun, Seung-Pyo;Park, Hyun-Woo
    • Journal of Korea Technology Innovation Society / v.20 no.3 / pp.732-753 / 2017
  • Recently, when evaluating technology values in the fields of biotechnology, pharmaceuticals and medicine, it has become more necessary to estimate those values in consideration of the period and cost required for future commercialization. The existing discounted cash flow (DCF) method has limitations in that it cannot consider consecutive investment and does not reflect the probabilistic nature of the commercialization cost of technology-applied products. Since the value of technology and investment should be treated as an opportunity value and the information used in decision-making for resource allocation should be taken into account, it is desirable to apply the concept of real options. In order to reflect the characteristics of the business model of the target technology in the concept of volatility, which is usually applied to stock prices in firm valuation, 'the continuity of the stock price (relatively minor changes)' and the 'positivity condition' need to be considered. Thus, as discussed in much of the literature, it is necessary to investigate the relationship among volatility, underlying asset value, and commercialization cost in the Black-Scholes model when estimating technology value based on real options. This study provides a more elaborate real options model by mathematically deriving whether the ratio of the present value of the underlying asset to the present value of the commercialization cost, which reflects the uncertainty in the option pricing model (OPM), falls into the "no action taken" (NAT) region under certain threshold conditions, and by presenting the estimation logic for option values according to the observed variables (or input values).
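
Since the derivation is anchored in the Black-Scholes option pricing model, a minimal sketch may help fix notation. Here S is taken as the present value of the underlying asset (expected cash flows from commercialization) and K as the present value of the commercialization cost; the input values are illustrative, and the paper's threshold derivation for the NAT region is not reproduced.

```python
# Black-Scholes call value as used in real-options valuation sketches.
# S: PV of underlying asset, K: PV of commercialization cost,
# r: risk-free rate, sigma: volatility, T: years to commercialization.
import math
from statistics import NormalDist

def real_option_value(S, K, r, sigma, T):
    N = NormalDist().cdf
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# Illustrative inputs: even with S below K the option retains some value,
# which shrinks toward zero as sigma and T decrease (toward a NAT-like region).
print(real_option_value(S=8.0, K=10.0, r=0.03, sigma=0.3, T=3.0))
```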

Case Study Analysis of Digital Education Design to Basic Concept Design Trend by Target of Education Needs in UK and Sweden (디지털 교육매체의 기초 컨셉디자인 동향 파악을 위한 선진국 사례 분석 - 영국과 스웨덴의 사용자 니즈를 중심으로 -)

  • Kim, Jung-Hee
    • Cartoon and Animation Studies / s.34 / pp.345-366 / 2014
  • Since the introduction of digital textbooks in 2007, many kinds of digital textbooks, in subjects such as English and science, have been used in public education. Beyond the early problems, such as simply using scanned data of paper textbooks as digital textbooks, dedicated contents and designs are now produced specifically for digital textbooks, but there remains a gap between the advanced countries of education and Korea. This research, based at the LG Europe Design Center in London, UK, targeted the UK and Sweden and used heuristic analysis and questionnaire investigation to capture the targets' user experience (UX) with digital education media. The advancement of digital technology and the interest in education are driving the worldwide development of digital education devices. The UK, an education-advanced nation, uses many digital education devices, such as interactive boards and digital desks. The results of the analysis of digital education design trends by target education needs were applied to rough designs by the LG Europe Design Center. We obtained more sophisticated needs and UX results by target than in Korea, which can be used for our future digital education design planning and can help reduce the gap between the advanced countries and Korea.

Complexity Metrics for Analysis Classes in the Unified Software Development Process (Unified Process의 분석 클래스에 대한 복잡도 척도)

  • 김유경;박재년
    • The KIPS Transactions:PartD / v.8D no.1 / pp.71-80 / 2001
  • Object-Oriented (OO) methodology, which uses concepts such as encapsulation, inheritance, polymorphism, and message passing, demands metrics different from those of structured methodology. There are many studies of OO software metrics, such as program complexity and design metrics, but metrics for analysis classes are needed to decrease complexity in the analysis phase, which greatly reduces the effort and cost of system development. In this paper, we propose new metrics to measure the complexity of the analysis classes drawn out in the analysis phase of the Unified Process. The collaboration complexity, denoted by CC, is the maximum number of collaborations that can be achieved with each collaborator and determines the potential complexity. The interface complexity, denoted by IC, shows the difficulty of understanding the interfaces of the collaborators. We prove mathematically that the suggested metrics satisfy OO characteristics such as class size and inheritance, and we verify them theoretically against Weyuker's nine properties. Moreover, we show the computation results for the analysis classes of a system that automatically responds to users' questions using text mining techniques. Comparing CC and IC with CBO and WMC, the complexity is represented better by CC and IC than by CBO and WMC. We expect that cost-effective OO software can be developed by reviewing the complexity of analysis classes in the first stage of the SDLC (Software Development Life Cycle).
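
A hedged sketch of how such metrics might be computed from an analysis-class collaboration model follows. The paper's definitions are richer; here CC is simplified to the maximum number of collaborations with any single collaborator, and IC to the number of distinct collaborator interfaces (class, message pairs) the class must understand, both assumptions made for illustration.

```python
from collections import Counter

# Hypothetical collaborations of one analysis class:
# (collaborator class, message) pairs, placeholders for illustration.
collaborations = [
    ("Order", "create"), ("Order", "cancel"), ("Order", "status"),
    ("Customer", "verify"), ("Inventory", "reserve"),
]

def cc(collabs):
    """Collaboration complexity: max collaborations per collaborator."""
    return max(Counter(cls for cls, _ in collabs).values())

def ic(collabs):
    """Interface complexity: number of distinct interfaces to understand."""
    return len(set(collabs))

print("CC =", cc(collaborations), "IC =", ic(collaborations))
```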

Study of Multi Floor Plant Layout Optimization Based on Particle Swarm Optimization (PSO 최적화 기법을 이용한 다층 구조의 플랜트 배치에 관한 연구)

  • Park, Pyung Jae;Lee, Chang Jun
    • Korean Chemical Engineering Research / v.52 no.4 / pp.475-480 / 2014
  • In research on plant layout optimization, the main goal is to minimize the cost of the pipelines connecting equipment. What previous studies lack, however, is the handling of multi-floor processes considering the safety distances for domino effects in a complex plant. The mathematical programming formulation can be transformed into an MILP (Mixed Integer Linear Programming) problem that considers safety distances, maintenance spaces, and economic benefits for solving the multi-floor plant layout problem. The objective function of this problem is to minimize the piping cost of connecting the facilities in the process. However, it is very hard to solve this problem due to complex inequality and equality constraints, such as sufficient spaces for maintenance and passages, which introduce many conditional statements into the objective function. Thus, it is impossible to solve this problem with conventional optimization solvers that use the derivatives of the objective function. In this study, the PSO (Particle Swarm Optimization) technique, one of the representative sampling approaches, is employed to find the optimal solution under various constraints. An EO (Ethylene Oxide) plant is illustrated to verify the efficacy of the proposed method.
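
To make the penalty-based approach concrete, the sketch below runs a minimal PSO that minimizes the piping cost between connected equipment while penalizing violations of a minimum safety distance, so that the conditional constraints enter the objective as penalty terms rather than through derivatives. The layout, connectivity, and distances are illustrative assumptions, not the EO plant data, and the multi-floor aspect is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, dim = 4, 2                      # 4 equipment items on a 2-D floor
pairs = [(0, 1), (1, 2), (2, 3)]         # connected equipment (piping)
D_SAFE = 3.0                             # minimum safety distance

def cost(x):
    pos = x.reshape(n_units, dim)
    pipe = sum(np.linalg.norm(pos[i] - pos[j]) for i, j in pairs)
    penalty = sum(max(0.0, D_SAFE - np.linalg.norm(pos[i] - pos[j]))**2
                  for i in range(n_units) for j in range(i + 1, n_units))
    return pipe + 100.0 * penalty        # penalty weight is a tuning choice

n_particles = 40
x = rng.uniform(0, 20, (n_particles, n_units * dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(300):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([cost(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("best layout cost:", pbest_f.min())
```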

Frequency-to-time Transformation by a Diffusion Expansion Method (분산 전개법에 의한 주파수-시간 영역 변환)

  • Cho, In-Ky;Kim, Rae-Yeong;Ko, Kwang-Beom;You, Young-June
    • Geophysics and Geophysical Exploration / v.17 no.3 / pp.129-136 / 2014
  • Electromagnetic (EM) methods are generally divided into frequency-domain EM (FDEM) and time-domain EM (TDEM) methods, depending on the source waveform. The FDEM and TDEM fields are mathematically related by the Fourier transformation, and the TDEM field can thus be obtained as the Fourier transformation of FDEM data. For modeling in the time domain, we can use fast frequency-domain modeling codes and then convert the results to the time domain with a suitable numerical method. Thus, frequency-to-time transformations, generally attained through the fast Fourier transform, are of interest for EM methods. However, a faster frequency-to-time transformation is required for the 3D inversion of TDEM data and for the processing of vast airborne TDEM data sets. The diffusion expansion method (DEM) is an efficient frequency-to-time transformation method. In DEM, the EM field is expanded into a sequence of diffusion functions with a known frequency dependence but with unknown diffusion times that must be chosen based on the data to be transformed; in particular, the accuracy of DEM is sensitive to the diffusion times. In this study, we developed a method to determine the optimum range of diffusion-time values that minimizes the RMS error of the frequency-domain data approximated by the diffusion expansion. We confirmed that this method produces accurate results over a wide time range for a homogeneous half-space and a two-layered model.
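
The core of the method can be sketched as follows: for a candidate set of diffusion times, the expansion coefficients are fitted by linear least squares, and candidate diffusion-time ranges are scanned for the minimum RMS misfit of the frequency-domain data. The kernel below is a generic diffusion-type stand-in, not the paper's exact basis functions, and the data are synthetic.

```python
import numpy as np

omega = np.logspace(0, 5, 80)                      # angular frequencies
data = 1.0 / np.sqrt(1.0 + 1j * omega * 1e-3)      # synthetic FDEM response

def rms_misfit(tau):
    """Fit expansion coefficients for diffusion-times tau; return RMS error."""
    B = 1.0 / (1.0 + 1j * np.outer(omega, tau))    # basis matrix (assumed kernel)
    a, *_ = np.linalg.lstsq(B, data, rcond=None)   # linear least-squares fit
    resid = B @ a - data
    return np.sqrt(np.mean(np.abs(resid)**2))

# Scan candidate diffusion-time ranges; keep the one with least RMS error.
best = min(
    ((t0, rms_misfit(np.logspace(np.log10(t0), np.log10(t0) + 3, 12)))
     for t0 in np.logspace(-6, -2, 20)),
    key=lambda p: p[1],
)
print(f"optimal starting diffusion-time ~{best[0]:.1e} s, RMS {best[1]:.2e}")
```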

Development of Moisture Content Prediction Model for Larix kaempferi Sawdust Using Near Infrared Spectroscopy (근적외선 분광분석법을 이용한 낙엽송 목분의 함수율 예측 모델 개발)

  • Chang, Yoon-Seong;Yang, Sang-Yun;Chung, Hyunwoo;Kang, Kyu-Young;Choi, Joon-Weon;Choi, In-Gyu;Yeo, Hwanmyeong
    • Journal of the Korean Wood Science and Technology / v.43 no.3 / pp.304-310 / 2015
  • The moisture content of sawdust must be measured accurately and controlled appropriately during storage and transportation, because improper moisture can cause biological degradation. In this study, near-infrared reflectance spectra (wavelength 1000~2400 nm) were used to measure the moisture content of Larix kaempferi sawdust. After acquiring the NIR reflectance spectra of specimens humidified at each relative humidity condition (25 °C, RH 30~99%), a moisture content prediction model was developed using mathematical preprocessing (e.g. smoothing, standard normal variate) and partial least squares (PLS) analysis of the acquired spectral data. The high reliability of the moisture content regression model with NIR spectroscopy was verified by a cross-validation test (R² = 0.94, RMSEP = 1.544). The results of this study show that NIR spectroscopy could be a convenient and accurate method for the nondestructive determination of the moisture content of sawdust, which could help optimize wood utilization.
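
A minimal sketch of the modeling pipeline described above, assuming scikit-learn for the PLS step: each spectrum is preprocessed by standard normal variate (SNV) and fed into a cross-validated PLS regression. The spectra and moisture values below are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))               # 60 specimens x 200 wavelengths
y = X[:, 50] * 2.0 + rng.normal(0, 0.1, 60)  # synthetic moisture contents

def snv(spectra):
    """Standard normal variate: center and scale each spectrum."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / \
           spectra.std(axis=1, keepdims=True)

pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, snv(X), y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))
```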