• Title/Summary/Keyword: high-throughput technologies


CGHscape: A Software Framework for the Detection and Visualization of Copy Number Alterations

  • Jeong, Yong-Bok;Kim, Tae-Min;Chung, Yeun-Jun
    • Genomics & Informatics
    • /
    • v.6 no.3
    • /
    • pp.126-129
    • /
    • 2008
  • The robust identification and comprehensive profiling of copy number alterations (CNAs) is highly challenging. The amount of data obtained from high-throughput technologies such as array-based comparative genomic hybridization is often very large, so a comprehensive and versatile tool is required for the genome-wide detection and visualization of CNAs. To this end, we introduce CGHscape, a software framework originally developed to explore CNAs for the study of copy number variation (CNV) or tumor biology. As a standalone program, CGHscape can be easily installed and run on the Microsoft Windows platform. With a user-friendly interface, CGHscape provides data smoothing to cope with the intrinsic noise of array data, and CNA detection based on the SW-ARRAY algorithm. The analysis results can be displayed as log2 plots for individual chromosomes or as the genomic distribution of identified CNAs. With its broad applicability, CGHscape can be used for the initial screening and visualization of CNAs, facilitating the cataloguing and characterization of chromosomal alterations across a cohort of samples.
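CGHscape itself detects CNAs with the SW-ARRAY algorithm; as a simpler, purely illustrative sketch of the smoothing-then-calling idea the abstract describes, the following hypothetical snippet smooths noisy log2 ratios with a moving average and then labels probes by naive thresholds (all function names and cutoffs are invented for illustration, not CGHscape's actual method):

```python
# Hypothetical sketch: moving-average smoothing of array-CGH log2 ratios,
# followed by naive threshold-based CNA calling. CGHscape itself uses the
# SW-ARRAY algorithm; this only illustrates the general smoothing/calling idea.

def smooth(log2_ratios, window=3):
    """Smooth noisy log2 ratios with a centered moving average."""
    n = len(log2_ratios)
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(log2_ratios[lo:hi]) / (hi - lo))
    return out

def call_cnas(smoothed, gain=0.3, loss=-0.3):
    """Label each probe as gain, loss, or neutral by simple thresholds."""
    return ["gain" if v >= gain else "loss" if v <= loss else "neutral"
            for v in smoothed]

ratios = [0.05, 0.4, 0.5, 0.45, -0.02, -0.5, -0.6, 0.0]
calls = call_cnas(smooth(ratios))
```

Smoothing before calling suppresses single-probe noise so that only runs of elevated or depressed log2 ratios are reported as candidate gains or losses.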

Web-Based Computational System for Protein-Protein Interaction Inference

  • Kim, Ki-Bong
    • Journal of Information Processing Systems
    • /
    • v.8 no.3
    • /
    • pp.459-470
    • /
    • 2012
  • Recently, high-throughput technologies such as the two-hybrid system, protein chips, mass spectrometry, and phage display have furnished a great deal of data on protein-protein interactions (PPIs), but the data have so far been limited in both accuracy and quantity. In this respect, computational techniques for the prediction and validation of PPIs have been developed. However, existing computational methods do not take into account the fact that a PPI actually originates from interactions of the domains that each protein contains. In this work, therefore, information on the domain modules of individual proteins has been employed to infer protein interaction relationships. The system developed here, WASPI (Web-based Assistant System for Protein-protein interaction Inference), has been implemented to provide functional insights into protein interactions and their domains. To achieve these objectives, several preprocessing steps were taken. First, the domain module information of interacting proteins was extracted using the InterPro database, which includes protein families, domains, and functional sites; the InterProScan program was used in this preprocessing step. Second, homology comparisons against GO (Gene Ontology) and COG (Clusters of Orthologous Groups), with E-value cutoffs of $10^{-5}$ and $10^{-3}$ respectively, were employed to obtain information on the function and annotation of each interacting protein in the secondary PPI database of the WASPI. The BLAST program was utilized for the homology comparison.
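The E-value filtering step described here (a strict cutoff for GO, a looser one for COG) can be sketched over BLAST's standard tabular output, where the E-value is the 11th column in `-outfmt 6`. The sample rows and query/subject names below are invented for illustration:

```python
# Hypothetical sketch: filtering BLAST tabular (-outfmt 6) hits by E-value,
# mirroring WASPI's use of cutoffs (1e-5 for GO, 1e-3 for COG).
# The sample rows are made up; column 11 (index 10) is the E-value.

def filter_hits(rows, e_cutoff):
    """Keep hits whose E-value passes the cutoff; return (query, subject, evalue)."""
    kept = []
    for line in rows:
        fields = line.split("\t")
        evalue = float(fields[10])
        if evalue <= e_cutoff:
            kept.append((fields[0], fields[1], evalue))
    return kept

hits = [
    "q1\ts1\t98.0\t100\t2\t0\t1\t100\t1\t100\t1e-30\t180",
    "q1\ts2\t40.0\t80\t40\t3\t1\t80\t5\t84\t2e-4\t50",
    "q2\ts3\t30.0\t60\t40\t5\t1\t60\t1\t60\t0.5\t28",
]
go_hits = filter_hits(hits, 1e-5)   # strict GO cutoff
cog_hits = filter_hits(hits, 1e-3)  # looser COG cutoff
```

The two cutoffs trade coverage for confidence: the looser COG threshold admits weaker homologs, while the GO threshold keeps only strong matches for functional annotation.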

Atomic Layer Deposition for Display Applications

  • Park, Jin-Seong
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2013.08a
    • /
    • pp.76.1-76.1
    • /
    • 2013
  • Atomic Layer Deposition (ALD) has developed remarkably in semiconductor and nano-structure applications since the early 1990s. The advantages of the ALD process are now well known: atomic-level thickness control, atomic-level composition control, and uniform deposition of impurity-free films. These unique properties may accelerate ALD-related industries and applications in various functional thin-film markets. Meanwhile, one of the big markets, the display industry, is just starting to look at the potential of ALD functional films in emerging display applications such as transparent and flexible displays. Unlike conventional ALD process strategies (good-quality films and stable precursors in high-temperature deposition processes), major display companies have recently stated the following requirements: large-area equipment, reasonable throughput, low-temperature processing, and cost-effective functional precursors. This talk will discuss the demands of the display industry for adopting ALD processes and/or functional films in emerging display technologies. In fact, the AMOLED (active matrix organic light emitting diode) television market only launched in early 2013, and there are several possibilities and development needs for the AMOLED, flexible, and transparent display markets. Moreover, some basic results will be shown for specific ALD display applications, including transparent conducting oxides, oxide semiconductors, and passivation and barrier films.


MALDI-MS: A Powerful but Underutilized Mass Spectrometric Technique for Exosome Research

  • Jalaludin, Iqbal;Lubman, David M.;Kim, Jeongkwon
    • Mass Spectrometry Letters
    • /
    • v.12 no.3
    • /
    • pp.93-105
    • /
    • 2021
  • Exosomes have gained the attention of the scientific community because of their role in facilitating intercellular communication, which is critical in disease monitoring and drug delivery research. Exosome research has grown significantly in recent decades, with a focus on the development of various technologies for isolating and characterizing exosomes. Among these efforts is the use of matrix-assisted laser desorption ionization (MALDI) mass spectrometry (MS), which offers high-throughput direct analysis while also being cost and time effective. MALDI is used less frequently in exosome research than electrospray ionization due to the diverse population of extracellular vesicles and the impurity of isolated products, both of which necessitate chromatographic separation prior to MS analysis. However, MALDI-MS is a more appropriate instrument for the analytical approach to patient therapy, given it allows for fast and label-free analysis. There is a huge drive to explore MALDI-MS in exosome research because the technology holds great potential, most notably in biomarker discovery. With methods such as fingerprint analysis, OMICs profiling, and statistical analysis, the search for biomarkers could be much more efficient. In this review, we highlight the potential of MALDI-MS as a tool for investigating exosomes and some of the possible strategies that can be implemented based on prior research.

OryzaGP 2021 update: a rice gene and protein dataset for named-entity recognition

  • Larmande, Pierre;Liu, Yusha;Yao, Xinzhi;Xia, Jingbo
    • Genomics & Informatics
    • /
    • v.19 no.3
    • /
    • pp.27.1-27.4
    • /
    • 2021
  • Due to the rapid evolution of high-throughput technologies, a tremendous amount of data is being produced in the biological domain, which poses a challenging task for information extraction and natural language understanding. Biological named entity recognition (NER) and named entity normalisation (NEN) are two common tasks aiming at identifying and linking biologically important entities such as genes or gene products mentioned in the literature to biological databases. In this paper, we present an updated version of OryzaGP, a gene and protein dataset for rice species created to help natural language processing (NLP) tools in processing NER and NEN tasks. To create the dataset, we selected more than 15,000 abstracts associated with articles previously curated for rice genes. We developed four dictionaries of gene and protein names associated with database identifiers. We used these dictionaries to annotate the dataset. We also annotated the dataset using pretrained NLP models. Finally, we analysed the annotation results and discussed how to improve OryzaGP.
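The dictionary-based annotation step described for OryzaGP can be sketched as a longest-match lookup over the text. The dictionary entries, gene names, and database identifiers below are invented for illustration; OryzaGP's actual dictionaries and identifier scheme may differ:

```python
# Hypothetical sketch: dictionary-based gene/protein annotation, the kind of
# longest-match lookup a name dictionary like OryzaGP's enables. The gene
# names and database identifiers here are invented for illustration.

def annotate(text, dictionary):
    """Return (start, end, surface, db_id) spans for longest dictionary matches."""
    spans = []
    i = 0
    names = sorted(dictionary, key=len, reverse=True)  # prefer longest match
    while i < len(text):
        for name in names:
            if text.startswith(name, i):
                spans.append((i, i + len(name), name, dictionary[name]))
                i += len(name)
                break
        else:
            i += 1
    return spans

gene_dict = {"OsMADS1": "RAP:Os03g0215400", "SUB1A": "RAP:Os09g0286600"}
spans = annotate("OsMADS1 interacts with SUB1A in rice.", gene_dict)
```

Each matched span carries its database identifier, which is exactly what a downstream named-entity normalisation step needs.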

Recent Research Trends of Exploring Catalysts for Ammonia Synthesis and Decomposition (암모니아 합성 및 분해를 위한 촉매 탐색의 최근 연구 동향)

  • Jong Yeong Kim;Byung Chul Yeo
    • Korean Chemical Engineering Research
    • /
    • v.61 no.4
    • /
    • pp.487-495
    • /
    • 2023
  • Ammonia is both a crucial resource for fertilizer production, helping solve mankind's food problem, and an important energy resource, serving as an eco-friendly hydrogen carrier and a carbon-free fuel. Ammonia synthesis and decomposition have therefore become promising research areas, and effective catalysts are required for both reactions. Designing high-performing yet inexpensive novel catalysts for ammonia synthesis and decomposition requires testing a huge number of catalyst candidates, which is inevitably time-consuming and expensive with traditional approaches alone. Recently, new methods using machine learning, one of the core technologies of the 4th industrial revolution, have emerged that can search for high-performance catalysts quickly and accurately. In this paper, we investigate the reaction mechanisms of ammonia synthesis and decomposition, and we describe recent research and prospects for machine learning-driven methods that can efficiently find high-performing and economical catalysts for ammonia synthesis and decomposition.
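The machine-learning-driven screening idea surveyed here can be sketched in miniature: train a surrogate model on a few measured catalysts, then use it to rank untested candidates before any expensive experiment or simulation. The snippet below uses a deliberately tiny 1-nearest-neighbour surrogate over a single made-up descriptor; all numbers and the descriptor itself are invented, and real studies use richer descriptors and models:

```python
# Hypothetical sketch: surrogate-model screening of catalyst candidates.
# Known catalysts are described by a single descriptor (e.g., a binding
# energy); a 1-nearest-neighbour surrogate predicts activity for untested
# candidates so the most promising ones can be prioritized. All numbers
# are invented for illustration.

def predict_1nn(descriptor, known):
    """Predict activity from the nearest known catalyst's descriptor."""
    nearest = min(known, key=lambda d: abs(d - descriptor))
    return known[nearest]

known = {-0.8: 0.2, -0.3: 0.9, 0.4: 0.5}   # descriptor -> measured activity
candidates = [-0.7, -0.25, 0.35]
ranked = sorted(candidates, key=lambda d: predict_1nn(d, known), reverse=True)
```

The point of the surrogate is economy: only the top-ranked candidates need be tested by experiment or first-principles calculation, and their results can then be fed back to retrain the model.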

The future of bioinformatics

  • Gribskov, Michael
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2003.10a
    • /
    • pp.1-1
    • /
    • 2003
  • It is clear that computers will play a key role in the biology of the future. Even now, it is virtually impossible to keep track of the key proteins, their names and associated gene names, physical constants (e.g. binding constants, reaction constants, etc.), and known physical and genetic interactions without computational assistance. In this sense, computers act as an auxiliary brain, allowing one to keep track of thousands of complex molecules and their interactions. With the advent of gene expression array technology, many experiments are simply impossible without this computer assistance. In the future, as we seek to integrate the reductionist description of life provided by genomic sequencing into complex and sophisticated models of living systems, computers will play an increasingly important role in both analyzing data and generating experimentally testable hypotheses. The future of bioinformatics is thus being driven by potent technological and scientific forces. On the technological side, new experimental technologies such as microarrays, protein arrays, high-throughput expression, and three-dimensional structure determination provide rapidly increasing amounts of detailed experimental information on a genomic scale. On the computational side, faster computers, ubiquitous computing systems, and high-speed networks provide a powerful but rapidly changing environment of potentially immense power. The challenges we face are enormous: How do we create stable data resources when both the science and computational technology change rapidly? How do we integrate and synthesize information from many disparate subdisciplines, each with its own vocabulary and viewpoint? How do we 'liberate' the scientific literature so that it can be incorporated into electronic resources? How do we take advantage of advances in computing and networking to build the international infrastructure needed to support a complete understanding of biological systems?
The seeds of solutions to these problems exist, at least partially, today. These solutions emphasize ubiquitous high-speed computation; database interoperation, federation, and integration; and the development of research networks that capture scientific knowledge rather than just the ABCs of genomic sequence. I will discuss a number of these solutions, with examples from existing resources, as well as areas where solutions do not currently exist, with a view to defining what bioinformatics and biology will look like in the future.


Bandwidth Management of WiMAX Systems and Performance Modeling

  • Li, Yue;He, Jian-Hua;Xing, Weixi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.2 no.2
    • /
    • pp.63-81
    • /
    • 2008
  • WiMAX has been introduced as a competitive alternative for metropolitan broadband wireless access technologies. It is connection oriented and it can provide very high data rates, large service coverage, and flexible quality of services (QoS). Due to the large number of connections and flexible QoS supported by WiMAX, the uplink access in WiMAX networks is very challenging since the medium access control (MAC) protocol must efficiently manage the bandwidth and related channel allocations. In this paper, we propose and investigate a cost-effective WiMAX bandwidth management scheme, named the WiMAX partial sharing scheme (WPSS), in order to provide good QoS while achieving better bandwidth utilization and network throughput. The proposed bandwidth management scheme is compared with a simple but inefficient scheme, named the WiMAX complete sharing scheme (WCPS). A maximum entropy (ME) based analytical model (MEAM) is proposed for the performance evaluation of the two bandwidth management schemes. The reason for using MEAM for the performance evaluation is that MEAM can efficiently model a large-scale system in which the number of stations or connections is generally very high, while the traditional simulation and analytical (e.g., Markov models) approaches cannot perform well due to the high computation complexity. We model the bandwidth management scheme as a queuing network model (QNM) that consists of interacting multiclass queues for different service classes. Closed form expressions for the state and blocking probability distributions are derived for those schemes. Simulation results verify the MEAM numerical results and show that WPSS can significantly improve the network’s performance compared to WCPS.
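The paper derives its own closed-form state and blocking probability expressions via a maximum-entropy queueing model; as a purely textbook illustration of what a "blocking probability" computation looks like for a loss system, the snippet below evaluates the classical Erlang-B formula with the numerically stable recursion (the channel and load numbers are invented, and this is not the paper's MEAM model):

```python
# Hypothetical sketch: the classical Erlang-B recursion for the blocking
# probability of a loss system with c servers and offered load a Erlangs.
# The paper derives its own closed-form expressions via a maximum-entropy
# queueing model; this is only a textbook illustration.

def erlang_b(servers, offered_load):
    """Blocking probability B(c, a) via the stable recursion
    B(c) = a*B(c-1) / (c + a*B(c-1)), starting from B(0) = 1."""
    b = 1.0
    for c in range(1, servers + 1):
        b = (offered_load * b) / (c + offered_load * b)
    return b

p_block = erlang_b(servers=10, offered_load=5.0)  # 10 channels, 5 Erlangs
```

The recursion avoids the overflow-prone factorials of the direct formula, which matters precisely in the large-scale regimes (many stations or connections) the abstract highlights.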

A review of gene selection methods based on machine learning approaches (기계학습 접근법에 기반한 유전자 선택 방법들에 대한 리뷰)

  • Lee, Hajoung;Kim, Jaejik
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.5
    • /
    • pp.667-684
    • /
    • 2022
  • Gene expression data present the level of mRNA abundance of each gene, and analyses of gene expression have provided key ideas for understanding the mechanisms of diseases and developing new drugs and therapies. Nowadays, high-throughput technologies such as DNA microarrays and RNA-sequencing enable the simultaneous measurement of thousands of gene expressions, giving rise to a characteristic of gene expression data known as high dimensionality. Due to this high dimensionality, learning models for analyzing gene expression data are prone to overfitting, and to address this issue, dimension reduction or feature selection techniques are commonly used as a preprocessing step. In particular, gene selection methods can remove irrelevant and redundant genes and identify important genes in this preprocessing step. Various gene selection methods have been developed in the context of machine learning so far. In this paper, we intensively review recent works on gene selection methods using machine learning approaches. In addition, the underlying difficulties of current gene selection methods as well as future research directions are discussed.
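The preprocessing role of gene selection can be sketched with the simplest filter-style method: rank genes by variance across samples and keep the top k. The methods reviewed in the paper are far more sophisticated (filter, wrapper, and embedded approaches), and the toy expression matrix below is invented for illustration:

```python
# Hypothetical sketch: a simple filter-style gene selection step -- rank
# genes by variance across samples and keep the top k. The toy expression
# matrix is invented; real methods are far more sophisticated.

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def select_top_k(expr, k):
    """expr: dict mapping gene -> list of expression values across samples."""
    ranked = sorted(expr, key=lambda g: variance(expr[g]), reverse=True)
    return ranked[:k]

expr = {
    "geneA": [1.0, 1.1, 0.9, 1.0],   # nearly constant -> likely uninformative
    "geneB": [0.2, 5.0, 0.1, 4.8],   # highly variable -> potentially informative
    "geneC": [2.0, 2.5, 1.5, 2.0],
}
top = select_top_k(expr, 2)
```

Even this crude filter shows why selection helps with high dimensionality: near-constant genes contribute no discriminative signal, yet still inflate the feature space a downstream model must fit.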

Design and Implementation of 60 GHz Wi-Fi for Multi-gigabit Wireless Communications (멀티-기가비트 무선 통신을 위한 60GHz Wi-Fi 설계 및 구현)

  • Yoon, Jung-Min;Jo, Ohyun
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.6
    • /
    • pp.43-49
    • /
    • 2020
  • In spite of notable advances in millimeter-wave communication technology, 60 GHz Wi-Fi is still not widespread, mainly due to its severely limited coverage. Conventionally, it has been hardly possible to support a high data rate with fast beam adaptation while maintaining adequate beamforming coverage. To address these challenges in the 60 GHz communication system, holistic system designs are considered. We implemented an enhanced LDPC decoder design enabling 6.72 Gbps coded throughput with minimal implementation loss, and our proposed phase-tracking algorithm guarantees a 3.2 dB performance gain at 1% PER in the case of 16 QAM modulation and LDPC code rate 3/4.