• Title/Summary/Keyword: I-graph


Winkler Springs (p-y curves) for pile design from stress-strain of soils: FE assessment of scaling coefficients using the Mobilized Strength Design concept

  • Bouzid, Dj. Amar;Bhattacharya, S.;Dash, S.R.
    • Geomechanics and Engineering
    • /
    • v.5 no.5
    • /
    • pp.379-399
    • /
    • 2013
  • In practice, the analysis of laterally loaded piles is carried out using a beam on non-linear Winkler springs model (often known as the p-y method) because of its simplicity, low computational cost and ability to model layered soils. In this approach, soil-pile interaction along the depth is characterized by a set of discrete non-linear springs represented by p-y curves, where p is the pressure on the soil that causes a relative deformation of y. p-y curves are usually constructed from semi-empirical correlations. To construct the API/DNV p-y curve for clay, one needs two values from monotonic stress-strain test results: the undrained strength ($s_u$) and the strain at 50% yield stress (${\varepsilon}_{50}$). Because not all data points in the stress-strain relation are used, this approach may ignore features of a particular soil and lead to an unconservative or over-conservative design. However, with the increasing ability to simulate soil-structure interaction problems on powerful computers, the trend has shifted towards a more theoretically sound basis. In this paper, the Mobilized Strength Design (MSD) concept is used to construct continuous p-y curves from the experimentally obtained stress-strain relationship of the soil. In the method, the stress-strain graph is scaled by two coefficients, $N_C$ (for stress) and $M_C$ (for strain), to obtain the p-y curves. $M_C$ and $N_C$ are derived with a semi-analytical finite element approach exploiting axial symmetry, in which the pile is modelled as a series of embedded discs. An example illustrates the application of the methodology.
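The scaling idea can be sketched in a few lines. This is an illustrative assumption, not the paper's derivation: we take the relations to be $y = M_C \cdot \varepsilon \cdot D$ and $p = N_C \cdot \sigma \cdot D$, and the coefficient and soil values below are invented for the example; the paper obtains $N_C$ and $M_C$ from semi-analytical FE analysis.

```python
# Hypothetical sketch: scaling a soil stress-strain curve into a p-y curve
# with MSD-style coefficients. The scaling relations and all numbers here
# are illustrative assumptions, not values from the paper.

def stress_strain_to_py(strains, stresses, pile_diameter, n_c, m_c):
    """Map each (strain, stress) point to a (y, p) point.

    y = M_C * strain * D   (pile deflection, same length unit as D)
    p = N_C * stress * D   (soil reaction per unit pile length)
    """
    ys = [m_c * eps * pile_diameter for eps in strains]
    ps = [n_c * sig * pile_diameter for sig in stresses]
    return ys, ps

# Idealized monotonic stress-strain data for a soft clay (illustrative):
strains = [0.0, 0.005, 0.01, 0.02, 0.04]
stresses = [0.0, 15.0, 22.0, 27.0, 30.0]   # kPa, s_u ~ 30 kPa
ys, ps = stress_strain_to_py(strains, stresses, pile_diameter=1.0,
                             n_c=9.0, m_c=2.5)
```

Because every point of the stress-strain curve is carried over, the resulting p-y curve is continuous rather than being anchored on $s_u$ and ${\varepsilon}_{50}$ alone.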

A Method for Microarray Data Analysis based on Bayesian Networks using an Efficient Structural learning Algorithm and Data Dimensionality Reduction (효율적 구조 학습 알고리즘과 데이타 차원축소를 통한 베이지안망 기반의 마이크로어레이 데이타 분석법)

  • 황규백;장정호;장병탁
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.11
    • /
    • pp.775-784
    • /
    • 2002
  • Microarray data, obtained with DNA chip technologies, are measurements of the expression levels of thousands of genes in cells or tissues. They are used for gene function prediction or cancer diagnosis based on gene expression patterns. Among the diverse methods for data analysis, the Bayesian network represents the relationships among data attributes in the form of a graph structure. This property enables us to discover various relations among genes and characteristics of the tissue (e.g., the cancer type) through microarray data analysis. However, most present microarray data sets are so sparse that it is difficult to apply general analysis methods, including Bayesian networks, directly. In this paper, we harness an efficient structural learning algorithm and data dimensionality reduction in order to analyze microarray data using Bayesian networks. The proposed method was applied to the analysis of real microarray data, i.e., the NCI60 data set, and its usefulness was evaluated by how accurately the learned Bayesian networks represent known biological facts.
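A toy sketch of the two-stage pipeline, under loud assumptions: this is not the authors' algorithm. Dimensionality reduction is stood in for by keeping the highest-variance genes, and "structure learning" by thresholding pairwise correlations; a real Bayesian-network learner would instead score candidate DAGs (e.g., with BDe or MDL). All gene names and values are made up.

```python
# Illustrative two-stage sketch (not the paper's method): variance-based
# gene filtering, then a crude dependency graph from pairwise correlations.
import statistics

def top_variance_genes(expr, k):
    """expr: dict gene -> list of expression values across samples."""
    ranked = sorted(expr, key=lambda g: statistics.pvariance(expr[g]),
                    reverse=True)
    return ranked[:k]

def correlation(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def dependency_edges(expr, genes, threshold=0.9):
    """Connect gene pairs whose expression profiles strongly co-vary."""
    return [(g1, g2)
            for i, g1 in enumerate(genes)
            for g2 in genes[i + 1:]
            if abs(correlation(expr[g1], expr[g2])) >= threshold]

expr = {
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [2.1, 4.0, 6.2, 8.1],   # strongly tracks geneA
    "geneC": [5.0, 5.0, 5.1, 5.0],   # nearly constant -> filtered out
}
genes = top_variance_genes(expr, k=2)
edges = dependency_edges(expr, genes)
```

The point of the reduction step is visible even at toy scale: the near-constant gene is dropped before any structure is learned, which is what makes sparse, high-dimensional microarray data tractable.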

Field emission properties of boron-doped diamond film (보론-도핑된 다이아몬드 박막의 전계방출 특성)

  • 강은아;최병구;노승정
    • Journal of the Korean Vacuum Society
    • /
    • v.9 no.2
    • /
    • pp.110-115
    • /
    • 2000
  • Deposition conditions for diamond thin films were optimized using hot-filament chemical vapor deposition (HFCVD). Boron-doped diamond thin films with varying boron densities were then fabricated using B4C solid pellets. Current-voltage responses and field emission currents were measured to characterize the films for field emission display (FED) applications. With increasing boron doping, the crystal size of the diamond decreased slightly, but film quality did not change significantly at low doping levels. I-V characterization was performed on Al/diamond/p-Si structures, and the current of the doped diamond film increased by a factor of $10^4\sim10^5$ compared with that of the undoped film. As for the field emission properties, with increasing doping the electrons were emitted at a lower electric field while the emission current increased. The onset field of electron emission was 15.5 V/$\mu\textrm{m}$ for 2 pellets, 13.6 V/$\mu\textrm{m}$ for 3 pellets and 11.1 V/$\mu\textrm{m}$ for 4 pellets. With the incorporation of boron, the slope of the Fowler-Nordheim graph decreased, revealing that electron emission improved as the effective barrier energy decreased.
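The Fowler-Nordheim analysis referred to here plots $\ln(I/E^2)$ against $1/E$; the slope is proportional to $-\phi^{3/2}$, so a shallower (less negative) slope indicates a lower effective barrier. A minimal sketch of extracting that slope, with invented I-E data (not the paper's measurements):

```python
# Hedged sketch: least-squares slope of a Fowler-Nordheim plot,
# ln(I/E^2) versus 1/E. Data below are illustrative, not from the paper.
import math

def fn_slope(fields, currents):
    """Least-squares slope of ln(I/E^2) vs 1/E for emission data."""
    xs = [1.0 / e for e in fields]
    ys = [math.log(i / e ** 2) for e, i in zip(fields, currents)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Illustrative data: field in V/um, current in A
fields = [12.0, 14.0, 16.0, 18.0]
currents = [1e-9, 8e-9, 4e-8, 1.4e-7]
slope = fn_slope(fields, currents)   # negative for true field emission
```

Comparing this slope between undoped and doped films is how the reported decrease in effective barrier energy would be quantified.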

Designing Distributed Real-Time Systems with Decomposition of End-to-End Timing Constraints (양극단 지연시간의 분할을 이용한 분산 실시간 시스템의 설계)

  • Hong, Seong-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.3 no.5
    • /
    • pp.542-554
    • /
    • 1997
  • In this paper, we present a resource-conscious approach to designing distributed real-time systems as an extension of our original approach [8][9], which was limited to single-processor systems. Starting from a given task graph and a set of end-to-end constraints, we automatically generate task attributes (e.g., periods and deadlines) such that (i) the task set is schedulable, and (ii) the end-to-end timing constraints are satisfied. The method works by first transforming the end-to-end timing constraints into a set of intermediate constraints on task attributes, and then solving the intermediate constraints. The complexity of constraint solving is tackled by reducing the problem into relatively tractable parts and solving each sub-problem with heuristics that enhance schedulability. In this paper, we build on our single-processor solution and show how it can be extended to distributed systems. The extension reveals many interesting sub-problems, solutions to which are presented in this paper. The main challenges arise from end-to-end propagation delay constraints, and this paper therefore focuses on our solutions for such constraints. We begin by extending our communication scheme to provide tight delay bounds across a network while hiding the low-level details of network communication. We also develop an algorithm that decomposes end-to-end bounds into local bounds on each processor by making extensive use of the relative load on each processor. This significantly decouples the constraints on each processor without losing the capability to find a schedulable solution. Finally, we show how each of these parts fits into our overall methodology, using our previous results for single-processor systems.
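The load-based decomposition step can be illustrated with the simplest possible heuristic: split the end-to-end bound among processors in proportion to their relative load. This proportional split is our assumption for illustration; the paper's algorithm is more involved.

```python
# Illustrative sketch (an assumed simple heuristic, not the paper's exact
# algorithm): decompose an end-to-end delay bound into local per-processor
# bounds, proportionally to each processor's relative load.

def decompose_deadline(end_to_end_bound, loads):
    """loads: utilization each processor contributes to the task chain."""
    total = sum(loads)
    return [end_to_end_bound * u / total for u in loads]

# A 100 ms end-to-end bound across three processors with loads 0.2, 0.3, 0.5
local = decompose_deadline(100.0, [0.2, 0.3, 0.5])
# local bounds sum back to the end-to-end bound: 20 + 30 + 50 = 100 ms
```

Once each processor has its own local bound, its constraint set can be solved independently, which is exactly the decoupling the abstract describes.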

Development of an Algorithm for Detecting Symptom Levels in Patients with Scleroderma

  • Jeong, Jin-Hyeong;Lee, Ki-Young;Kim, Min-yeong;Kim, Nam-Sun;Lee, Sang-Sik
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.8 no.5
    • /
    • pp.367-372
    • /
    • 2015
  • In this study, the locality of scleroderma (skin hardening) was detected. Scleroderma is difficult to diagnose, so images of normal subjects were compared with those of scleroderma patients after monochrome processing. The saturation, brightness, and contrast were adjusted, and the images were converted using a Well Filter. As a result, the images could be used to clearly distinguish the symptoms of scleroderma. In addition, taking the viewing direction of the images as an elevation of $0^{\circ}$, a closing process followed by the Well Filter was applied so that only the horizontal plane remained, and the amplitude differences of the images were read from a graph at regular intervals. On this basis, diagnostic criteria were determined for healthy subjects and scleroderma patients.

Gene annotation by the "interactome" analysis in KEGG

  • Kanehisa, Minoru
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2000.11a
    • /
    • pp.56-58
    • /
    • 2000
  • Post-genomics may be defined in different ways depending on how one views the challenges after the genome. A popular view follows the concept of the central dogma in molecular biology, namely from genome to transcriptome to proteome. Projects are under way to analyze gene expression profiles at both the mRNA and protein levels and to catalog protein 3D structure families, which will no doubt help the understanding of information in the genome. However complete, such catalogs of genes, RNAs, and proteins only tell us about the building blocks of life. They do not tell us much about the wiring (interaction) of the building blocks, which is essential for uncovering the systemic functional behaviors of the cell or the organism. Thus, an alternative view of post-genomics is to go up from the molecular level to the cellular level and to understand what I call the "interactome", or a complete picture of molecular interactions in the cell. KEGG (http://www.genome.ad.jp/kegg/) is our attempt to computerize current knowledge of various cellular processes as a collection of "generalized" protein-protein interaction networks, to develop new graph-based algorithms for predicting such networks from genome information, and to actually reconstruct the interactomes for all the completely sequenced genomes and some partial genomes. During the reconstruction process, it becomes readily apparent that certain pathways and molecular complexes are present or absent in each organism, indicating modular structures of the interactome. In addition, the reconstruction uncovers missing components in an otherwise complete pathway or complex, which may result from misannotation of the genome or misrepresentation of the KEGG pathway. When combined with additional experimental data on protein-protein interactions, such as those from yeast two-hybrid systems, the reconstruction can uncover unknown partners for a particular pathway or complex. Thus, the reconstruction is tightly coupled with the annotation of individual genes, which is maintained in the GENES database in KEGG. We are also trying to expand our literature survey to include in the GENES database the most up-to-date information about gene functions.

Development of MS Excel Macros to estimate regression models and test hypotheses of relationships between variables (Application to regression analysis of subway electric charges data) (MS Excel 함수들을 이용한 회귀 분석 모형 추정 및 관계 분석 검정을 위한 매크로 개발 (지하철 전기요금 자료 회귀분석에 응용))

  • Kim, Sook-Young
    • Journal of the Korea Computer Industry Society
    • /
    • v.10 no.5
    • /
    • pp.213-220
    • /
    • 2009
  • Regression analysis, which estimates fitted models and tests hypotheses, is a basic statistical tool for survey data as well as experimental data. Data are collected as pairs of independent and dependent variables, and the statistics are computed by matrix calculation. Estimating the best-fitting model is the key to maximizing the reliability of a regression analysis. To fit a regression model, one plots the data on XY axes and selects the best-fitting model. Researchers estimate the best model and test hypotheses with MS Excel's graph menu and matrix computation functions. In this study, I develop macros that estimate the fitted regression model and test hypotheses about the relationships between variables. Subway electric charges data, with one dependent variable and three independent variables, were analyzed using the developed macros and compared with the results of Excel's built-in regression analysis.
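The matrix calculation such macros typically reproduce (with MMULT, MINVERSE and TRANSPOSE in Excel) is the least-squares normal equation $\beta = (X^{\mathsf T}X)^{-1}X^{\mathsf T}y$. A minimal sketch in plain Python for one predictor plus an intercept; the data are illustrative, not the subway electric charges data from the paper:

```python
# Sketch of ordinary least squares for y = b0 + b1*x, the one-predictor
# special case of beta = (X'X)^-1 X'y. Data below are illustrative only.

def ols(xs, ys):
    """Fit y = b0 + b1*x by ordinary least squares."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - b1 * sx) / n
    return b0, b1

# Data generated from y = 3 + 2x exactly, so the fit should recover (3, 2)
b0, b1 = ols([1.0, 2.0, 3.0, 4.0], [5.0, 7.0, 9.0, 11.0])
```

With three independent variables, as in the paper's data set, the same formula applies with X as an n-by-4 matrix (intercept column plus three predictors), which is where Excel's matrix functions earn their keep.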

Technology Valuation Method for Improving Its Reliability to Expand Technology Transfer Market (기술이전 활성화를 위한 사업화주체 발굴 전(前) 단계에서의 기술가치평가 신뢰성 제고방안에 관한 연구)

  • Oh, Dongchan;Lee, Jaesik;You, Wanghee;Kim, Seungkyo
    • Journal of Technology Innovation
    • /
    • v.22 no.3
    • /
    • pp.287-310
    • /
    • 2014
  • Owing to intensifying knowledge-based competition among nations, it has become important to industrialize R&D results. To industrialize R&D results through technology transfer, the transfer fee must be negotiated between the technology developer and a company, and the results of technology valuation are used in that negotiation. However, because such results are often unreliable, the technology transfer market has not expanded. In this paper, we discuss improving the reliability of the technology valuation method in order to expand the technology transfer market. The proposed scheme provides graph-type technology valuation results for various industrial scenarios using objective technology and market data. With the proposed scheme, the technology developer and the consumer (i.e., the company) can determine an appropriate technology transfer fee. The proposed scheme is therefore expected to contribute to the expansion of the technology transfer market.

Wireless Measurement of Human Motion Based on PDA (PDA기반 인체동작 무선계측)

  • Lee, Myong-Ho;Kim, Nam-Jin;Lee, Hwun-Jae;Jin, Gae-Whan;Lee, Sam-Ual;Lee, Jun-Hang;Lee, Sang-Bock;Lee, Tae-Soo
    • Journal of the Korean Society of Radiology
    • /
    • v.1 no.1
    • /
    • pp.39-44
    • /
    • 2007
  • In this study, a wireless measurement technique for human motion was developed to monitor patients with movement disorders during their daily life. MICA, TinyOS, and nesC, developed at UC Berkeley, were used as the wireless sensor node, its software platform, and the programming language, respectively. Human motion data generated by a two-axis accelerometer (ADXL202) were transmitted to a PDA (iPaq 3630) by a 916 MHz short-range communication chip (TR1000) and stored on the PDA with a simple Windows CE program. To test the developed device, it was attached to the chest and the acquired data were plotted as a graph during sitting, standing, and lying motions. The results showed that human motion could be logged without any wires or constraints. Therefore, this device can be used to monitor patients' movement disorders and activities of daily life (ADL).

Integrative Analysis of Microarray Data with Gene Ontology to Select Perturbed Molecular Functions using Gene Ontology Functional Code

  • Kim, Chang-Sik;Choi, Ji-Won;Yoon, Suk-Joon
    • Genomics & Informatics
    • /
    • v.7 no.2
    • /
    • pp.122-130
    • /
    • 2009
  • A systems biology approach to identifying perturbed molecular functions is required to understand complex progressive diseases such as breast cancer. In this study, we analyze microarray data with the Gene Ontology terms for molecular functions to select perturbed molecular functional modules in breast cancer tissues, based on the definition of Gene Ontology Functional Codes. The Gene Ontology consists of three structured vocabularies describing genes and their products in terms of their associated biological processes, cellular components and molecular functions. The Gene Ontology is hierarchically organized as a directed acyclic graph. It is difficult, however, to visualize the Gene Ontology as a directed tree, since a GO term may have more than one parent and can therefore be reached from the root by multiple paths. We therefore applied the definition of Gene Ontology codes, assigning one or more GO code(s) to each GO term, to visualize the hierarchical classification of GO terms as a network. The selected molecular functions can be considered perturbed molecular functional modules that putatively contribute to the progression of the disease. We evaluated the method by analyzing a microarray dataset of breast cancer tissues, i.e., normal and invasive breast cancer tissues. Based on this integration approach, we selected several interesting perturbed molecular functions implicated in the progression of breast cancer; moreover, these selected molecular functions include several known breast cancer-related genes. We conclude that the present strategy is capable of selecting perturbed molecular functions that putatively play roles in the progression of diseases, and that it improves the interpretability of GO terms based on the definition of Gene Ontology codes.
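The path-multiplicity problem the GO-code scheme addresses can be made concrete with a few lines of code: in a DAG, a term with several parents is reached by several root-to-term paths, so assigning one code per path lets the hierarchy be drawn as a tree. The term names below are hypothetical stand-ins, not actual GO identifiers.

```python
# Minimal sketch: enumerate every root-to-term path in a parent-pointer
# DAG. A term with two parents yields two paths, hence two GO codes.
# Term names are hypothetical, chosen only to mimic GO's structure.

def root_paths(term, parents, root="molecular_function"):
    """Return every path from the root to `term`."""
    if term == root:
        return [[root]]
    paths = []
    for p in parents.get(term, []):
        for path in root_paths(p, parents, root):
            paths.append(path + [term])
    return paths

parents = {
    "binding": ["molecular_function"],
    "catalytic_activity": ["molecular_function"],
    "kinase_activity": ["catalytic_activity", "binding"],  # two parents
}
paths = root_paths("kinase_activity", parents)
# "kinase_activity" is reached by two distinct paths from the root,
# so under the GO-code scheme it receives two codes, one per path.
```

Duplicating multi-parent terms across their paths in this way is what turns the unvisualizable DAG into a drawable tree-like network.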