• Title/Summary/Keyword: Information Continuum

Search Results: 95

Molecular gas and star formation in early-type galaxies

  • Bureau, Martin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.36 no.2
    • /
    • pp.65-65
    • /
    • 2011
  • Early-type galaxies represent the end point of galaxy evolution and, despite pervasive residual star formation, are generally considered "red and dead", that is, composed exclusively of old stars with no star formation. Here, their molecular gas content is constrained and discussed in relation to their evolution, supporting the continuing importance of minor mergers and/or cold gas accretion. First, as part of the Atlas3D survey, the first complete, large, volume-limited survey of CO in normal early-type galaxies is presented. At least 23% of local early-types possess a substantial amount of molecular gas, the necessary ingredient for star formation, independent of mass and environment but dependent on the specific stellar angular momentum. Second, using CO synthesis imaging, the extent of the molecular gas is constrained and a variety of morphologies is revealed. The kinematics of the molecular gas and stars are often misaligned, implying an external gas origin in over a third of all systems, more than half in the field, while external gas accretion must be shut down in clusters. Third, many objects appear to be in the process of forming regular kpc-size decoupled disks, and a star formation sequence can be sketched by piecing together multi-wavelength information on the molecular gas, current star formation, and young stars. Fourth, early-type galaxies do not seem to systematically obey all our usual prejudices regarding star formation (e.g. Schmidt-Kennicutt law, far infrared-radio continuum correlation), suggesting a greater diversity in star formation processes than observed in disk galaxies and the possibility of "morphological quenching". Lastly, a first step toward constraining the physical properties of the molecular gas is taken, by modeling the line ratios of density- and opacity-sensitive molecules in a few objects. Taken together, these observations argue for the continuing importance of (minor) mergers and cold gas accretion in local early-types, and they provide a much greater understanding of the gas cycle in the galaxies harbouring most of the stellar mass. In the future, better dust masses and dust-to-gas mass ratios from Herschel should allow us to place entirely independent constraints on the gas supply, while spatially-resolved high-density molecular gas tracers observed with ALMA will probe the interstellar medium and star formation laws locally in a regime entirely different from that normally probed in spiral galaxies.
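
For context, the Schmidt-Kennicutt law referenced above relates the star formation rate surface density to the gas surface density. A commonly quoted form is sketched below; the power-law exponent N ≈ 1.4 is the Kennicutt (1998) disk-galaxy value, assumed here for illustration, and is not stated in the abstract itself.

```latex
% Commonly quoted form of the Schmidt-Kennicutt star formation law.
% N \approx 1.4 is the Kennicutt (1998) disk-galaxy value, assumed here
% for illustration; the abstract does not quote a value.
\Sigma_{\mathrm{SFR}} = A \, \Sigma_{\mathrm{gas}}^{N}, \qquad N \approx 1.4
```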


HiMang: Highly Manageable Network and Service Architecture for New Generation

  • Choi, Tae-Sang;Lee, Tae-Ho;Kodirov, Nodir;Lee, Jae-Gi;Kim, Do-Yeon;Kang, Joon-Myung;Kim, Sung-Su;Strassner, John;Hong, James Won-Ki
    • Journal of Communications and Networks
    • /
    • v.13 no.6
    • /
    • pp.552-566
    • /
    • 2011
  • The Internet is a very successful modern technology and is considered one of the most important means of communication. Despite that success, fundamental architectural and business limitations exist in the Internet's design. Among these limitations, we focus in this paper on a specific issue: the lack of manageability. Although it is generally understood that management is a significant and important part of network and service design, it has not been treated as an integral part of the design phase. We address this problem with our future Internet management architecture, called highly manageable network and service architecture for new generation (HiMang), a novel architecture that aims at integrating management capabilities into network and service design. HiMang is highly manageable in the sense that it is autonomous, scalable, robust, and evolutionary, while reducing the complexity of network management. Unlike other management frameworks, HiMang provides management support for the revolutionary networks of the future while maintaining backward compatibility with existing networks.

Shape Design Sensitivity Analysis using Isogeometric Approach (CAD 형상을 활용한 설계 민감도 해석)

  • Ha, Seung-Hyun;Cho, Seon-Ho
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2007.04a
    • /
    • pp.577-582
    • /
    • 2007
  • A variational formulation for plane elasticity problems is derived based on an isogeometric approach. Isogeometric analysis is an emerging methodology in which the basis functions of the analysis domain are generated directly from the NURBS (Non-Uniform Rational B-Splines) geometry. Thus, the solution space can be represented in terms of the same functions used to represent the geometry, and the coefficients of the basis functions, or control variables, play the role of degrees of freedom. Furthermore, due to h-, p-, and k-refinement schemes, high-order geometric features can be described exactly and easily without a tedious re-meshing process. The isogeometric sensitivity analysis method enables us to analyze arbitrarily shaped structures without re-meshing. It also provides a precise way to construct a finite element model that exactly represents the geometry, using the B-spline basis functions of the CAD geometric model. To obtain precise shape sensitivities, the normal and curvature of the boundary should be taken into account in the shape sensitivity expressions. However, in conventional finite element methods, the normal information is inaccurate and the curvature is generally missing due to the use of linear interpolation functions. A continuum-based adjoint sensitivity analysis method using the isogeometric approach is derived for plane elasticity problems. Conventional shape optimization using the finite element method has difficulties in the parameterization of the boundary. In isogeometric analysis, however, the geometric properties are already embedded in the B-spline shape functions and control points, and the perturbation of control points automatically results in shape changes. Using the conventional finite element method, the inter-element continuity of the design space is not guaranteed, so the normal vector and curvature are not accurate enough. In isogeometric analysis, on the other hand, these values are continuous over the whole design space, so accurate shape sensitivities can be obtained. Through numerical examples, the developed isogeometric sensitivity analysis method is verified to show excellent agreement with finite difference sensitivities.
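
As a side note on how a single basis can represent both the geometry and the solution field, here is a minimal sketch of the Cox-de Boor recursion for B-spline basis functions, the non-rational building block of NURBS. The function name, knot vector, and control points are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's code): Cox-de Boor recursion for
# B-spline basis functions, the building block of the NURBS bases that
# isogeometric analysis uses for both geometry and solution fields.
import numpy as np

def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        right = (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

# A curve point is a control-point combination of the same basis functions;
# perturbing a control point therefore perturbs the shape directly.
knots = [0, 0, 0, 1, 2, 3, 3, 3]                       # open knot vector, degree 2
ctrl = np.array([[0, 0], [1, 2], [3, 3], [4, 0], [5, 1]], float)
u = 1.5
point = sum(bspline_basis(i, 2, u, knots) * ctrl[i] for i in range(len(ctrl)))
```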


Fabrication of Transient Absorption Spectroscopic System and Measurement of Transient Absorption Changes of DDI (순간흡수 분광학 측정장치 구성 및 DDI의 순간흡수율 변화 측정)

  • Seo, Jung-Chul;Lee, Min-Yung;Kim, Dong-Ho;Jeong, Hong-Sik;Park, Seung-Han;Kim, Ung
    • Korean Journal of Optics and Photonics
    • /
    • v.2 no.4
    • /
    • pp.209-213
    • /
    • 1991
  • Recently, developments in generating and amplifying ultrashort optical pulses (ps = $10^{-12}$ s or fs = $10^{-15}$ s) have driven great advances in time-resolved laser spectroscopy. Transient absorption spectroscopy in particular has a wide range of applications, and the main idea of this technique is the pump & probe method. After the pump pulse drives the material into an excited or transient state, the probe pulse is sent through the material to measure the absorbance change due to the transient states. If the absorbance change is measured as a function of the time delay between the pump & probe pulses, dynamic information on the excited or transient states (the transient absorption changes with time & wavelength) can be obtained. At our laboratory, the ultrashort optical pulse (
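
The quantity measured in such a pump & probe experiment is typically the pump-induced change in absorbance at probe wavelength λ and pump-probe delay τ. A sketch of the standard definition follows; the abstract itself does not write this formula out.

```latex
% Standard pump-probe definition of the transient absorbance change at
% probe wavelength \lambda and pump-probe delay \tau (illustrative; the
% abstract does not spell this out).
\Delta A(\lambda,\tau)
  = -\log_{10}\!\frac{I_{\mathrm{probe}}^{\mathrm{pump\ on}}(\lambda,\tau)}
                     {I_{\mathrm{probe}}^{\mathrm{pump\ off}}(\lambda)}
```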


Computer modeling of elastoplastic stress state of fibrous composites with hole

  • Polatov, Askhad M.;Ikramov, Akhmat M.;Khaldjigitov, Abduvali A.
    • Coupled systems mechanics
    • /
    • v.8 no.4
    • /
    • pp.299-313
    • /
    • 2019
  • The paper presents computer modeling of the deformed state of physically nonlinear, transversely isotropic bodies with a hole. In order to describe the anisotropy of the mechanical properties of transversely isotropic materials, a structurally phenomenological model has been used. This model allows representing the initial material in the form of coupled isotropic materials: the basic material (binder), considered from the standpoint of continuum mechanics, and the fiber material, oriented along the anisotropy direction of the original material. It is assumed that the fibers carry only axial tension-compression forces and are deformed together with the base material. To solve the problems of the theory of plasticity, simplified theories of small elastoplastic deformation for a transversely isotropic body, developed by B.E. Pobedrya, have been used. The simplified theory allows applying the theory of small elastoplastic deformations to specific applied problems, since in this case the fibrous medium is replaced by an equivalent transversely isotropic medium with effective mechanical parameters. The essence of the simplification is that under simple stretching of the composite in the direction of the transversal isotropy axis, and in the direction perpendicular to it, plastic deformations do not arise. As a result, the intensity of stresses and deformations is determined separately both along the principal axis of transversal isotropy and along the perpendicular plane of isotropy. The representation of the fibrous composite as a homogeneous anisotropic material with effective mechanical parameters allows a sufficiently accurate calculation of stresses and strains. The calculation is carried out under different loading conditions, keeping in mind that both sizes characterizing the fibrous material (the fiber thickness and the gap between fibers) are several orders of magnitude smaller than the radius of the hole. Based on the simplified theory and the finite element method, a computer model of nonlinear deformation of fibrous composites is constructed, and a specialized software package was developed for carrying out computational experiments. The effect of hole configuration on the distribution of deformation and stress fields in the vicinity of concentrators was investigated.
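
As an illustration of replacing a fibrous medium with an equivalent transversely isotropic medium with effective parameters, here is a minimal sketch using the classical rule-of-mixtures bounds. The function name and numerical values are made-up examples; the paper's own effective parameters follow Pobedrya's simplified theory and may be computed differently.

```python
# Illustrative sketch (not the paper's model): classical rule-of-mixtures
# estimates for a unidirectional fiber composite treated as an equivalent
# transversely isotropic medium. E_f and E_m are fiber/matrix Young's
# moduli, v_f the fiber volume fraction; the inputs below are made up.
def effective_moduli(E_f, E_m, v_f):
    E_axial = v_f * E_f + (1.0 - v_f) * E_m          # Voigt bound, along the fibers
    E_trans = 1.0 / (v_f / E_f + (1.0 - v_f) / E_m)  # Reuss bound, across the fibers
    return E_axial, E_trans

E_axial, E_trans = effective_moduli(E_f=230e9, E_m=3.5e9, v_f=0.6)
print(f"E_axial = {E_axial/1e9:.1f} GPa, E_trans = {E_trans/1e9:.2f} GPa")
```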

Experimental Techniques for Surface Science with Synchrotron Radiation

  • Johnson, R.L.;Bunk, O.;Falkenberg, G.;Kosuch, R.;Zeysing, J.
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 1998.02a
    • /
    • pp.17-17
    • /
    • 1998
  • Synchrotron radiation is produced when charged particles moving with relativistic velocities are accelerated - for example, deflected by the bending magnets which guide the electrons or positrons in circular accelerators or storage rings. By using special focusing magnetic lattices in the particle accelerators it is possible to make the dimensions of the particle beam very small with a high charge density, which results in a light source with high brilliance. Synchrotron light has important properties which make it ideal for a wide range of investigations in surface science. The fact that the spectrum of electromagnetic radiation emitted in a bending magnet extends in a continuum from the far infrared region to hard x-rays means that it is ideal for a variety of spectroscopic studies. Since there are no convenient lasers, or other really bright light sources, in the vacuum ultraviolet and soft x-ray regions, the development of synchrotron radiation has enabled enormous advances to be made in this difficult spectral region. Polarization-dependent measurements, for example ellipsometry or circular dichroism studies, are possible because the radiation has a well-defined polarization - linear in the plane of orbit with additional right-circular, or left-circular, components for emission angles above, or below, the horizontal, respectively. Since the synchrotron light is emitted from a bunch of charge circulating in a ring, the light is emitted with a well-defined time structure, with a short flash of light every time a bunch passes an exit port. The time structure depends on the size of the ring and the number and sequence of filling of the bunches. A pulsed light source enables time-resolved studies to be performed, which provide direct information on the lifetimes and decay modes of excited states, and in addition opens up the possibility of using time-of-flight techniques for spectroscopic studies. The fact that synchrotron radiation is produced in a clean ultrahigh vacuum environment is of great importance for surface science studies. The current third generation synchrotron light sources provide exceptionally high brilliance and stability and open up possibilities for experiments which would have been inconceivable only a short time ago.


Estimation and Mapping of Soil Organic Matter using Visible-Near Infrared Spectroscopy (분광학을 이용한 토양 유기물 추정 및 분포도 작성)

  • Choe, Eun-Young;Hong, Suk-Young;Kim, Yi-Hyun;Zhang, Yong-Seon
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.43 no.6
    • /
    • pp.968-974
    • /
    • 2010
  • We assessed the feasibility of the discrete wavelet transform (DWT), applied as spectral processing, for enhancing the estimation quality of soil organic matter from visible-near infrared spectra, and mapped its distribution via a block kriging model. Continuum-removal and $1^{st}$ derivative transforms, as well as Haar and Daubechies DWTs, were used to enhance the spectral variation related to soil organic matter contents, and those spectra were put into a PLSR (Partial Least Squares Regression) model. Estimation results using raw reflectance and transformed spectra showed similar quality, with $R^2$ > 0.6 and RPD > 1.5; these values indicate only approximate prediction of soil organic matter contents. The poor performance of estimation using DWT spectra might be caused by the coarser approximation of the DWT, which is not sufficient to express the spectral variation associated with soil organic matter contents. The distribution maps of soil organic matter were drawn via a spatial information model, kriging. Organic matter contents of the soil samples followed a Gaussian distribution centered at around 20 g $kg^{-1}$, and the values in the map were distributed with similar patterns. The estimated organic matter contents had a distribution similar to the measured values, even though some parts of the estimated-value map showed slightly higher values. If the estimation quality is further improved, the estimation model and spectroscopic mapping may be applied to global soil mapping, soil classification, and remote sensing data analysis as a rapid and cost-effective method.
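
A minimal sketch of the kind of pipeline the abstract describes: preprocessing reflectance spectra with a 1st-derivative or Haar DWT transform and feeding them into PLSR. The placeholder arrays, component count, and train/test split are assumptions, not the authors' data or settings.

```python
# Illustrative sketch (not the authors' code) of the preprocessing-plus-PLSR
# pipeline described in the abstract: 1st-derivative or Haar-DWT transforms
# of reflectance spectra regressed against soil organic matter contents.
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((120, 512))               # placeholder reflectance spectra
y = rng.random(120) * 40                 # placeholder organic matter, g/kg

X_deriv = np.gradient(X, axis=1)                      # 1st-derivative transform
X_dwt = pywt.wavedec(X, "haar", level=3, axis=1)[0]   # Haar approximation coeffs

X_tr, X_te, y_tr, y_te = train_test_split(X_dwt, y, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
r2 = pls.score(X_te, y_te)               # R^2 on held-out samples
```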

The Standardization on the Appraisal of Records: Analysis of the Appraisal Principles and Process in ISO 15489-1:2016 (기록 평가의 표준화 - ISO 15489 개정판에서의 평가 원리 및 절차 분석 -)

  • Kim, Myeong-Hun
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.18 no.4
    • /
    • pp.45-68
    • /
    • 2018
  • It is difficult to apply standardized principles and methodologies to the appraisal of records, because what kinds of records are important, and how to select them, differ for each country, region, and organization. For this reason, numerous theories and methodologies have been proposed around the appraisal of records. ISO 15489-1:2016, on the other hand, has laid the groundwork for a standardization of the appraisal of records that is applicable globally in recent records management environments, based on a new perspective on appraisal. ISO 15489 presents principles and methodologies of records management that are consistent with the electronic records environment, based on the records continuum theory, and the principles and methods of appraisal presented in ISO 15489-1:2016 need to be analyzed in depth. This paper analyzes the concept and logic of the appraisal of records presented in ISO 15489-1:2016 to find a direction for improvement in accordance with the recent electronic records environment. For this purpose, it reviews the course of development from the enactment of AS 4390 to the revision of ISO 15489, in order to understand the background of the appraisal principles of ISO 15489-1:2016. Based on this, the appraisal concepts and principles presented in ISO 15489-1:2016 are examined, and the appraisal process is analyzed.

Automatic Detection of Type II Solar Radio Bursts by Using a 1-D Convolutional Neural Network

  • Kyung-Suk Cho;Junyoung Kim;Rok-Soon Kim;Eunsu Park;Yuki Kubo;Kazumasa Iwai
    • Journal of The Korean Astronomical Society
    • /
    • v.56 no.2
    • /
    • pp.213-224
    • /
    • 2023
  • Type II solar radio bursts show frequency drifts from high to low over time. They are known as a signature of coronal shocks associated with Coronal Mass Ejections (CMEs) and/or flares, which cause abrupt changes in the space environment near the Earth (space weather). Therefore, early detection of type II bursts is important for the forecasting of space weather. In this study, we develop a deep-learning (DL) model for the automatic detection of type II bursts. For this purpose, we adopted a 1-D Convolutional Neural Network (CNN), as it is well-suited for processing the spatiotemporal information within the applied data set. We utilized a total of 286 radio burst spectrum images obtained by the Hiraiso Radio Spectrograph (HiRAS) from 1991 to 2012, along with 231 spectrum images without bursts from 2009 to 2015, to recognize type II bursts. The burst types were labeled manually according to their spectral features in an answer table. Subsequently, we applied the 1-D CNN technique to the spectrum images using two filter windows with different sizes along the time axis. To develop the DL model, we randomly selected 412 spectrum images (80%) for training and validation. The training history shows that both training and validation losses dropped rapidly, while training and validation accuracies increased, within approximately 100 epochs. To evaluate the model's performance, we used 105 test images (20%) and employed a contingency table. The false alarm ratio (FAR) and critical success index (CSI) were found to be 0.14 and 0.83, respectively. Furthermore, we confirmed the above result by adopting a five-fold cross-validation method, in which we re-sampled five groups randomly; the estimated mean FAR and CSI of the five groups were 0.05 and 0.87, respectively. For experimental purposes, we applied our proposed model to 85 HiRAS type II radio bursts listed in the NGDC catalogue from 2009 to 2016 and 184 quiet (no-burst) spectrum images before and after the type II bursts. As a result, our model successfully detected 79 (93%) of the type II events. These results demonstrate, for the first time, that the 1-D CNN algorithm is useful for detecting type II bursts.
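
A minimal sketch of the kind of architecture the abstract describes: a 1-D CNN that convolves along the time axis with two filter windows of different sizes. All layer widths and kernel sizes below are assumptions, not values from the paper.

```python
# Illustrative sketch (not the authors' model): a minimal 1-D CNN binary
# classifier for radio dynamic spectra, convolving along the time axis
# with two different filter window (kernel) sizes.
import torch
import torch.nn as nn

class TypeIIDetector(nn.Module):
    def __init__(self, n_freq_channels: int):
        super().__init__()
        # Treat each frequency channel as an input channel; convolve along
        # time with two different window sizes, as the abstract describes.
        self.branch_small = nn.Conv1d(n_freq_channels, 16, kernel_size=3, padding=1)
        self.branch_large = nn.Conv1d(n_freq_channels, 16, kernel_size=9, padding=4)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
            nn.Flatten(),
            nn.Linear(32, 1),          # burst / no-burst logit
        )

    def forward(self, x):              # x: (batch, n_freq_channels, n_time)
        h = torch.cat([self.branch_small(x), self.branch_large(x)], dim=1)
        return self.head(h)

model = TypeIIDetector(n_freq_channels=64)
logit = model(torch.randn(4, 64, 128))  # 4 spectra, 64 channels, 128 time steps
```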

The Contact and Parallel Analysis of SPH Using Cartesian Coordinate Based Domain Decomposition Method (Cartesian 좌표기반 동적영역분할을 고려한 SPH의 충돌 및 병렬해석)

  • Moonho Tak
    • Journal of the Korean GEO-environmental Society
    • /
    • v.25 no.4
    • /
    • pp.13-20
    • /
    • 2024
  • In this paper, a parallel analysis algorithm for Smoothed Particle Hydrodynamics (SPH), one of the numerical methods for fluidic materials, is introduced. SPH, which is a meshless method, can represent the behavior of a continuum using a particle-based approach, but it demands substantial computational resources; parallel analysis algorithms are therefore essential for SPH simulations. The domain decomposition algorithm, which divides the computational domain into partitions to be analyzed independently, is the most representative of the parallel analysis algorithms. In the Discrete Element Method (DEM) and Molecular Dynamics (MD), the Cartesian coordinate-based domain decomposition method is popular because it offers quick and convenient access to particle positions. In SPH, however, it is important to share particle information among the partitioned domains, because SPH particles are defined by information from nearby particles within the smoothing length; maintaining CPU load balance is also crucial. In this study, a highly efficient parallel algorithm is proposed that dynamically minimizes the size of the orthogonal domain partitions to prevent excess CPU utilization. The efficiency of the proposed method was validated through numerical analysis models: its parallel efficiency was evaluated for up to 30 CPUs on fluidic models, achieving 90% parallel efficiency for up to 28 physical cores.
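
A minimal sketch of the Cartesian cell-binning step on which such domain decomposition rests: particles are hashed into cells whose edge equals the smoothing length, so a partition needs only its own cells plus a one-cell halo of neighbour information. The toy partitioning below is illustrative and is not the paper's dynamic load-balancing scheme.

```python
# Illustrative sketch (not the paper's algorithm): the Cartesian-grid
# binning common to SPH/DEM/MD domain decomposition. With cell edge h
# (the smoothing length), every neighbour of a particle lies in its own
# cell or an adjacent one, so partitions exchange only a one-cell halo.
import numpy as np
from collections import defaultdict

def bin_particles(positions: np.ndarray, h: float):
    """Map integer (i, j) cell keys to lists of particle indices."""
    cells = defaultdict(list)
    for idx, key in enumerate(map(tuple, np.floor(positions / h).astype(int))):
        cells[key].append(idx)
    return cells

def halo_cells(owned_keys, all_cells):
    """Cells adjacent to a partition's owned cells (its ghost region)."""
    owned = set(owned_keys)
    halo = set()
    for (i, j) in owned:
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                nb = (i + di, j + dj)
                if nb not in owned and nb in all_cells:
                    halo.add(nb)
    return halo

pos = np.random.default_rng(1).random((1000, 2))  # toy 2-D particle cloud
cells = bin_particles(pos, h=0.1)
left_half = [k for k in cells if k[0] < 5]        # a naive static partition
ghosts = halo_cells(left_half, cells)             # cells to receive from peers
```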