• Title/Summary/Keyword: fundamental informatics

Search Results: 87

OPTIMAL PORTFOLIO STRATEGIES WITH A LIABILITY AND RANDOM RISK: THE CASE OF DIFFERENT LENDING AND BORROWING RATES

  • Yang, Zhao-Jun;Huang, Li-Hong
    • Journal of applied mathematics & informatics
    • /
    • v.15 no.1_2
    • /
    • pp.109-126
    • /
    • 2004
  • This paper deals with two problems of optimal portfolio strategies in continuous time. The first studies the optimal behavior of a firm that is forced to withdraw funds continuously at a fixed rate per unit time. The second considers a firm faced with an uncontrollable stochastic cash flow, or random risk process. We assume the firm's income can be obtained only from investment in two assets: a risky asset (e.g., a stock) and a riskless asset (e.g., a bond). The firm's wealth therefore follows a stochastic process, and when the wealth falls below a certain legal level, the firm goes bankrupt. How to invest so as to avoid bankruptcy is thus the firm's fundamental problem. For the case of different lending and borrowing rates, we obtain the optimal portfolio strategies for some reasonable objective functions that are piecewise linear functions of the firm's current wealth, and we present some interesting proofs for the conclusions. The optimal policies are easy for any relevant investor to implement.
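The wealth dynamics the abstract describes (investment split between a risky and a riskless asset, continuous withdrawal at a fixed rate, bankruptcy at a legal level, and a riskless rate that differs for lending versus borrowing) can be sketched with a simple Euler-Maruyama simulation. This is an illustrative sketch only: all parameter values, the constant-fraction strategy `pi`, and the function itself are assumptions for exposition, not the paper's model or solution.

```python
import math
import random

def simulate_wealth(w0=10.0, mu=0.08, sigma=0.2, r_lend=0.02, r_borrow=0.05,
                    withdraw=0.5, pi=0.6, legal_level=0.0, T=1.0, n=252, seed=1):
    """Euler-Maruyama sketch of firm wealth with a constant fraction pi in the
    risky asset. The riskless rate depends on the bond position: the firm earns
    r_lend when lending (pi <= 1) and pays r_borrow when borrowing (pi > 1).
    Withdrawals occur at a fixed rate; wealth at or below the legal level means
    bankruptcy."""
    rng = random.Random(seed)
    dt = T / n
    w = w0
    for _ in range(n):
        r = r_lend if pi <= 1.0 else r_borrow  # different lending/borrowing rates
        drift = (r + pi * (mu - r)) * w - withdraw
        shock = pi * sigma * w * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        w += drift * dt + shock
        if w <= legal_level:  # bankruptcy: wealth hits the legal level
            return legal_level
    return w
```

With no risky exposure and no withdrawal, the wealth simply compounds at the lending rate, which gives a quick sanity check on the discretization.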

A big picture view of precision nutrition: from reductionism to holism

  • Kwon, Oran
    • Journal of Nutrition and Health
    • /
    • v.52 no.1
    • /
    • pp.1-5
    • /
    • 2019
  • Purpose: This review describes the historical shift in nutrition philosophy from a reductionist to a holistic approach during the 20th century. The role and efficient use of a holistic approach in precision nutrition are also discussed. Results: Over the past century, significant progress has been made in human nutrition research, unraveling the fundamental mechanisms of single nutrients acting on single targets or pathways. This reductionist approach has helped save populations from nutrient-deficiency diseases and improve associated health outcomes in large parts of the world. However, a new set of nutrition problems, such as obesity and diet-related chronic diseases, is growing each year worldwide, increasing the financial burden on the health care system. A linear cause-effect association between a single nutrient and a single physiologic effect is insufficient to capture the complex nutrition-health relationship. Research that takes a more holistic rather than reductionist approach is needed to tackle this new set of nutrition problems. Recent advances in technology, informatics, and statistical methods are enabling an understanding of the diversity of individuals and the complex interactions between foods and human bodies, leading to the concept of "precision nutrition." Conclusion: The emerging goal of precision nutrition is to provide tailored dietary advice for maintaining health and preventing obesity and diet-related chronic diseases. The components are already being put in place. To grasp the complexity, reductionism and holism must be used interdependently.

Experimental development of the epigenomic library construction method to elucidate the epigenetic diversity and causal relationship between epigenome and transcriptome at a single-cell level

  • Park, Kyunghyuk;Jeon, Min Chul;Kim, Bokyung;Cha, Bukyoung;Kim, Jong-Il
    • Genomics & Informatics
    • /
    • v.20 no.1
    • /
    • pp.2.1-2.11
    • /
    • 2022
  • Single-cell RNA sequencing methods have developed rapidly, and numerous experiments have been conducted over the past decade. Their results allow us to recognize various subpopulations and rare cell states in tissues, tumors, and immune systems that were previously unidentified, and they guide us in understanding the fundamental biological processes that determine cell identity based on single-cell gene expression profiles. However, it is still challenging to understand the principles of comprehensive gene regulation that determine cell fate from the transcriptome alone, which is a consequential output of the gene expression program. To elucidate the mechanisms underlying the origin and maintenance of the comprehensive single-cell transcriptome, we require a corresponding single-cell epigenome, which constitutes the differentiated information of each cell with an identical genome. This review covers the current development of single-cell epigenomic library construction methods, including multi-omics tools, together with crucial factors and future requirements, focusing on DNA methylation, chromatin accessibility, and histone post-translational modifications. The study of cellular differentiation and disease occurrence at the single-cell level has taken its first step with the single-cell transcriptome and is now taking the next step with the single-cell epigenome.

Overseas Research Trends Related to 'Research Ethics' Using LDA Topic Modeling

  • YANG, Woo-Ryeong;YANG, Hoe-Chang
    • Journal of Research and Publication Ethics
    • /
    • v.3 no.1
    • /
    • pp.7-11
    • /
    • 2022
  • Purpose: The purpose of this study is to derive clues about the development direction of research ethics, which has recently become a social issue in Korea, and its areas of interest by examining overseas research trends. Research design, data and methodology: We collected 2,760 articles from scienceON that include 'research ethics' in the paper. For analysis, frequency analysis, word clouding, keyword association analysis, and LDA topic modeling were used. Results: Many of the papers were published in medical, bio, pharmaceutical, and nursing journals, and interest in the topic has been continuously increasing. The word frequency analysis confirmed many words from medical fields, such as health, clinical, and patient. From topic modeling, 7 topics were extracted, such as ethical policy development and human clinical ethics. Conclusions: We found that overseas research trends on research ethics address more fundamental aspects than those in Korea. This means that a fundamental approach to ethics and the application of strict standards can become the basis for cultivating an overall ethical awareness. Therefore, academic discussions are needed on applying strict standards for publication ethics and for conducting research in the various fields where community awareness and social consensus are necessary for overall ethical awareness.

Analyzation and Improvements of the Revised 2015 Education Curriculum for Information Science of Highschool: Focusing on Information Ethics and Multimedia (고등학교 정보과학의 2015 개정 교육과정에 대한 분석 및 개선 방안: 정보윤리와 멀티미디어를 중심으로)

  • Jeong, Seungdo;Cho, Jungwon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.8
    • /
    • pp.208-214
    • /
    • 2016
  • With the rising interest in intelligence information technology built on artificial intelligence and big data technologies, all countries in the world including advanced countries such as the United States, the United Kingdom, Japan and so on, have launched national investment programs in preparation for the fourth industrial revolution centered on the software industry. Our country belatedly recognized the importance of software and initiated the 2015 revised educational curriculum for elementary and secondary informatics subjects. This paper thoroughly analyzes the new educational curriculum for information science in high schools and, then, suggests improvements in the areas of information ethics and multimedia. The analysis of the information science curriculum is applied to over twenty science high schools and schools for gifted children, which are expected to play a leading role in scientific research in our country. In the future artificial intelligence era, in which our dependence on information technology will be further increased, information ethics education for talented students who will play the leading role in making and utilizing artificial intelligence systems should be strongly emphasized, and the focus of their education should be different from that of the existing system. Also, it is necessary that multimedia education centered on digital principles and compression techniques for images, sound, videos, etc., which are commonly used in real life, should be included in the 2015 revised educational curriculum. In this way, the goal of the 2015 revised educational curriculum can be achieved, which is to encourage innovation and the efficient resolution of problems in real life and diverse academic fields based on the fundamental concepts, principles and technology of computer science.

Determining Food Nutrition Information Preference Through Big Data Log Analysis (빅데이터 로그분석을 통한 식품영양정보 선호도 분석)

  • Song, Hana;Lee, Hae-Jeung;Lee, Hunjoo
    • Journal of Food Hygiene and Safety
    • /
    • v.38 no.5
    • /
    • pp.402-408
    • /
    • 2023
  • Consumer interest in food nutrition continues to grow; however, research on consumer preferences related to nutrition remains limited. In this study, big data analysis was conducted using keyword logs collected from the national information service, the Korean Food Composition Database (K-FCDB), to determine consumer preferences for foods of nutritional interest. The data collection period ran from January 2020 to December 2022, covering a total of 2,243,168 food-name keywords searched by K-FCDB users. Food names were processed by merging them into representative food names, and their search frequency was analyzed for the entire period and by season using R. In the frequency analysis for the entire period, steamed rice, chicken, and egg were the foods most frequently searched for by Koreans. Seasonal preference analysis revealed that in spring and summer, broth-free and cold dishes were searched for frequently, whereas in fall and winter, broth-based and warm dishes were more popular. Foods sold by restaurants as seasonal items, such as Naengmyeon and Kongguksu, also exhibited seasonal variation in search frequency. These results provide insight into consumer interest in the nutritional information of commonly consumed foods and are expected to serve as fundamental data for formulating seasonal marketing strategies in the restaurant industry, given their indirect relevance to consumer trends.
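The season-by-season frequency analysis described here (the study used R) can be sketched in a few lines. The log records below are invented stand-ins for the K-FCDB keyword logs, and the month-to-season mapping is an assumption.

```python
from collections import Counter
from datetime import date

# Hypothetical log records: (search date, merged representative food name).
logs = [
    (date(2021, 7, 3), "naengmyeon"),
    (date(2021, 7, 9), "naengmyeon"),
    (date(2021, 8, 1), "kongguksu"),
    (date(2021, 1, 15), "tteokguk"),
    (date(2021, 12, 24), "tteokguk"),
]

# Assumed month-to-season mapping (Dec-Feb winter, Mar-May spring, etc.).
SEASONS = {12: "winter", 1: "winter", 2: "winter",
           3: "spring", 4: "spring", 5: "spring",
           6: "summer", 7: "summer", 8: "summer",
           9: "fall", 10: "fall", 11: "fall"}

# Count keyword frequency within each season.
by_season = {}
for d, food in logs:
    by_season.setdefault(SEASONS[d.month], Counter())[food] += 1

# Most-searched food per season, analogous to the paper's seasonal ranking.
for season, counts in sorted(by_season.items()):
    print(season, counts.most_common(1))
```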

The Prediction of the Expected Current Selection Coefficient of Single Nucleotide Polymorphism Associated with Holstein Milk Yield, Fat and Protein Contents

  • Lee, Young-Sup;Shin, Donghyun;Lee, Wonseok;Taye, Mengistie;Cho, Kwanghyun;Park, Kyoung-Do;Kim, Heebal
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.29 no.1
    • /
    • pp.36-42
    • /
    • 2016
  • Milk-related traits (milk yield, fat, and protein) have been crucial to the selection of Holstein cattle, and it is essential to determine the current selection trends in Holstein. Despite this, uncovering current selection trends has been neglected in previous studies. We suggest a new formula to detect current selection trends based on single nucleotide polymorphisms (SNPs). This suggestion is based on the best linear unbiased prediction (BLUP) and Fisher's fundamental theorem of natural selection, both of which are trait-dependent. Fisher's theorem links the additive genetic variance to the selection coefficient. For Holstein milk production traits, we estimated the additive genetic variance using SNP effects from BLUP, and selection coefficients based on the genetic variance, to search for highly selected SNPs. Through these processes, we identified significantly selected SNPs. The number of genes containing highly selected SNPs with p-value < 0.01 (nearly the top 1% of SNPs) in all traits and p-value < 0.001 (nearly the top 0.1%) in any trait was 14. They are phosphodiesterase 4B (PDE4B), serine/threonine kinase 40 (STK40), collagen, type XI, alpha 1 (COL11A1), ephrin-A1 (EFNA1), netrin 4 (NTN4), neuron specific gene family member 1 (NSG1), estrogen receptor 1 (ESR1), neurexin 3 (NRXN3), spectrin, beta, non-erythrocytic 1 (SPTBN1), ADP-ribosylation factor interacting protein 1 (ARFIP1), mutL homolog 1 (MLH1), transmembrane channel-like 7 (TMC7), carboxypeptidase X, member 2 (CPXM2), and ADAM metallopeptidase domain 12 (ADAM12). These genes may be important for future artificial selection trends. We also found that the SNP effect predicted from BLUP is the key factor determining the expected current selection coefficient of an SNP: under Hardy-Weinberg equilibrium of the SNP markers in the current generation, the selection coefficient is equivalent to 2 × (SNP effect).
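The two quantities the abstract connects can be written out numerically. The additive-variance formula below is the standard Hardy-Weinberg expression Va = 2p(1-p)a², and the selection-coefficient identity is the one stated at the end of the abstract (s = 2 × SNP effect); the numeric values are illustrative only.

```python
def additive_genetic_variance(p, a):
    """Per-SNP additive genetic variance under Hardy-Weinberg equilibrium:
    Va = 2 * p * (1 - p) * a**2, for allele frequency p and additive effect a."""
    return 2.0 * p * (1.0 - p) * a ** 2

def expected_selection_coefficient(a):
    """The abstract's identity under HWE in the current generation:
    the expected selection coefficient equals twice the SNP effect."""
    return 2.0 * a

# Illustrative SNP: allele frequency 0.3, BLUP-estimated effect 0.1.
print(additive_genetic_variance(0.3, 0.1))
print(expected_selection_coefficient(0.1))
```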

Seismic motions in a non-homogeneous soil deposit with tunnels by a hybrid computational technique

  • Manolis, G.D.;Makra, Konstantia;Dineva, Petia S.;Rangelov, Tsviatko V.
    • Earthquakes and Structures
    • /
    • v.5 no.2
    • /
    • pp.161-205
    • /
    • 2013
  • We study seismically induced, anti-plane strain wave motion in a non-homogeneous geological region containing tunnels. Two different scenarios are considered: (a) the first models two tunnels in a finite geological region embedded within a laterally inhomogeneous, layered geological profile containing a seismic source. For this case, labelled the first boundary-value problem (BVP 1), an efficient hybrid technique comprising the finite difference method (FDM) and the boundary element method (BEM) is developed and applied. Since the latter method is based on the frequency-dependent fundamental solution of elastodynamics, the hybrid technique is defined in the frequency domain, and an inverse fast Fourier transform (FFT) is then used to recover time histories. (b) The second models a finite region with two tunnels, embedded in a homogeneous half-plane and subjected to incident, time-harmonic SH-waves. This case, labelled the second boundary-value problem (BVP 2), considers complex soil properties such as anisotropy, continuous inhomogeneity, and poroelasticity. The computational approach is now the BEM alone, since solution of the surrounding half-plane by the FDM is unnecessary. In sum, the hybrid FDM-BEM technique is able to quantify the dependence of the signals that develop at the free surface on the following key parameters: the seismic source properties and heterogeneous structure of the wave path (the FDM component), and the near-surface geological deposits containing discontinuities in the form of tunnels (the BEM component). Finally, the hybrid technique is used to evaluate the seismic wave field that develops within a key geological cross-section of the Metro construction project in Thessaloniki, Greece, which includes the important Roman-era historical monument of the Rotunda, dating from the 3rd century A.D.
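The final step of BVP 1, recovering time histories from a frequency-domain solution by inverse FFT, can be illustrated with a toy signal. The damped sinusoid below is an invented stand-in for a surface motion; the point is only that a one-sided spectrum transforms back to the original time history.

```python
import numpy as np

# A known "time history": a damped 5 Hz oscillation sampled at 100 Hz.
n, dt = 256, 0.01
t = np.arange(n) * dt
signal = np.exp(-2.0 * t) * np.sin(2 * np.pi * 5.0 * t)

# Forward transform: time domain -> one-sided frequency domain (where a
# frequency-dependent method like the BEM would actually do its work).
spectrum = np.fft.rfft(signal)

# Inverse FFT recovers the time history from the frequency-domain solution.
recovered = np.fft.irfft(spectrum, n)

# Reconstruction error is at the level of floating-point round-off.
print(float(np.max(np.abs(recovered - signal))))
```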

Use of Graph Database for the Integration of Heterogeneous Biological Data

  • Yoon, Byoung-Ha;Kim, Seon-Kyu;Kim, Seon-Young
    • Genomics & Informatics
    • /
    • v.15 no.1
    • /
    • pp.19-27
    • /
    • 2017
  • Understanding the complex relationships among heterogeneous biological data is one of the fundamental goals of biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships through multiple-join statements. Recently, a new type of database, the graph database, was developed to natively represent various kinds of complex relationships, and it is widely used in computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph database for complex biological relationships by comparing the performance of MySQL with that of Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested query execution performance, Neo4j outperformed MySQL in all cases: while Neo4j responded very quickly to various queries, MySQL gave slow or unfinished responses for complex queries with multiple-join statements. These results show that using graph databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
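The contrast the paper draws can be sketched in memory: a relational store answers a multi-hop question with joins over tables, while a graph store keeps direct adjacency and answers it by traversal. The entity names below are illustrative, not the paper's actual schema or data.

```python
# Relational style: two "tables" of rows, related only through identifiers.
drug_target = [("aspirin", "PTGS2")]            # drug -> target gene
gene_disease = [("PTGS2", "inflammation")]      # gene -> disease

# A 2-hop question (drug -> gene -> disease) needs a join over the two tables;
# each extra hop in a real relational schema means another join.
joined = [(d, dis)
          for d, g in drug_target
          for g2, dis in gene_disease
          if g == g2]

# Graph style: adjacency lists store relationships natively, so the same
# question is a simple traversal with no joins.
graph = {"aspirin": ["PTGS2"], "PTGS2": ["inflammation"]}

def two_hop(node):
    """Return all nodes exactly two relationship hops away from `node`."""
    return [far for near in graph.get(node, []) for far in graph.get(near, [])]

print(joined)
print(two_hop("aspirin"))
```

At 82 million relationships, the per-hop cost of repeated joins is what makes the relational queries slow, while traversal cost in a graph store scales with the neighbourhood actually visited.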

Effect of Combining Multiple CNV Defining Algorithms on the Reliability of CNV Calls from SNP Genotyping Data

  • Kim, Soon-Young;Kim, Ji-Hong;Chung, Yeun-Jun
    • Genomics & Informatics
    • /
    • v.10 no.3
    • /
    • pp.194-199
    • /
    • 2012
  • In addition to single-nucleotide polymorphisms (SNPs), copy number variation (CNV) is a major component of human genetic diversity. Among the many whole-genome analysis platforms, SNP arrays have been commonly used for genome-wide CNV discovery. Recently, a number of algorithms for defining CNVs from SNP genotyping data have been developed; however, due to the fundamental limitations of SNP genotyping data for measuring signal intensity, there are still concerns about false discoveries or low sensitivity in detecting CNVs. In this study, we aimed to verify the effect of combining multiple CNV calling algorithms and to set up the most reliable pipeline for CNV calling with Affymetrix Genomewide SNP 5.0 data. For this purpose, we selected the 3 algorithms most commonly used for CNV segmentation from SNP genotyping data: PennCNV, QuantiSNP, and BirdSuite. After defining the CNV loci using the 3 different algorithms, we assessed how many of them overlapped with each other, and we also validated the CNVs by genomic quantitative PCR. Based on this analysis, we propose that for a reliable CNV-based genome-wide association study using SNP array data, CNV calls must be performed with at least 3 different algorithms, and that the CNVs consistently called by 2 or more algorithms must be used for association analysis, because they are more reliable than CNVs called by a single algorithm. Our results will help in setting up CNV analysis protocols for Affymetrix Genomewide SNP 5.0 genotyping data.
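The consensus rule the study recommends (run multiple callers, keep loci supported by more than one) can be sketched as a simple intersection count. This is an illustrative simplification: CNV loci are reduced to exact (chrom, start, end) tuples, whereas real pipelines match calls by reciprocal overlap, and the coordinates below are invented.

```python
from collections import Counter

def consensus_cnvs(calls_by_algorithm, min_algorithms=2):
    """Keep CNV loci called by at least `min_algorithms` of the callers.

    `calls_by_algorithm` maps an algorithm name to a set of CNV loci,
    simplified here to hashable (chrom, start, end) tuples."""
    counts = Counter(locus
                     for calls in calls_by_algorithm.values()
                     for locus in set(calls))
    return {locus for locus, n in counts.items() if n >= min_algorithms}

# Hypothetical call sets for the three algorithms named in the abstract.
calls = {
    "PennCNV":   {("chr1", 100, 200), ("chr2", 50, 80)},
    "QuantiSNP": {("chr1", 100, 200), ("chr3", 10, 40)},
    "BirdSuite": {("chr1", 100, 200), ("chr2", 50, 80)},
}
print(sorted(consensus_cnvs(calls)))
```

Here the chr3 locus, supported by only one caller, is dropped, which mirrors the study's finding that single-algorithm calls are less reliable.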