• Title/Abstract/Keywords: large-scale data

Search results: 2,776 (processing time: 0.035 sec)

초대형 해석 결과의 분석을 위한 고해상도 타일 가시화 시스템 개발 (High-Resolution Tiled Display System for Visualization of Large-scale Analysis Data)

  • 김홍성;조진연;양진오
    • 한국항공우주학회지 / Vol. 34, No. 6 / pp.67-74 / 2006
  • In this paper, a high-resolution tiled visualization system that enables detailed analysis of very large analysis data was developed using a low-cost cluster computer system and low-resolution display devices. Considerations for building the tiled visualization hardware were examined, and a projector position adjustment device that can eliminate screen distortion was designed and fabricated. In developing the tiled visualization software, the Qt and OpenGL libraries were used for the graphical user interface and rendering. In addition, the LAM-MPI library was used to synchronize the partial images obtained from each cluster computer node into a single overall screen, producing a distortion-free full tiled image.
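
The abstract above describes synchronizing per-node tile fragments into one overall screen via LAM-MPI. Purely as a rough, hypothetical sketch of that synchronization idea (using mpi4py and NumPy rather than the paper's LAM-MPI/OpenGL stack, with assumed tile sizes), the gathering and assembly step could look like this:

```python
# Hypothetical sketch: gather per-node tile fragments into one full frame.
# Uses mpi4py + NumPy instead of the LAM-MPI/OpenGL stack described in the paper.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this node's tile index
n_tiles = comm.Get_size()       # total number of cluster nodes / tiles

TILE_W, TILE_H = 1024, 768      # per-projector resolution (assumed)

# Each node renders its own tile (here: a dummy flat image instead of OpenGL output).
tile = np.full((TILE_H, TILE_W, 3), rank * 20, dtype=np.uint8)

# Barrier keeps all nodes on the same frame before display (the "synchronization").
comm.Barrier()

# Rank 0 collects every fragment and lays them out side by side.
tiles = comm.gather(tile, root=0)
if rank == 0:
    full_frame = np.concatenate(tiles, axis=1)   # simple 1 x n_tiles layout
    print("assembled frame:", full_frame.shape)
```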

Crop Leaf Disease Identification Using Deep Transfer Learning

  • Changjian Zhou;Yutong Zhang;Wenzhong Zhao
    • Journal of Information Processing Systems / Vol. 20, No. 2 / pp.149-158 / 2024
  • Traditional manual identification of crop leaf diseases is laborious, and owing to limitations in manpower and resources it is difficult to survey crop diseases on a large scale. The emergence of artificial intelligence technologies, particularly the extensive application of deep learning, is expected to overcome these challenges and greatly improve the accuracy and efficiency of crop disease identification. Crop leaf disease identification models have been designed and trained on large-scale training data, enabling them to predict different categories of diseases from unlabeled crop leaves. However, these models, which possess strong feature representation capabilities, require substantial training data, and such datasets are often in short supply in practical farming scenarios. To address this issue and improve the feature learning abilities of models, this study proposes a deep transfer learning adaptation strategy. The proposed method transfers weights and parameters from models pre-trained on similar large-scale datasets, such as ImageNet. ImageNet pre-trained weights are adopted and fine-tuned on the features of crop leaf diseases to improve prediction ability. In this study, we collected 16,060 crop leaf disease images spanning 12 categories for training. The experimental results demonstrate that an accuracy of 98% is achieved with the proposed method on the transferred ResNet-50 model, confirming the effectiveness of our transfer learning approach.
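
As a minimal sketch of the transfer-learning setup the abstract describes (ImageNet-pretrained weights fine-tuned for 12 disease classes), the following PyTorch/torchvision code shows one plausible realization; the layer-freezing choice, learning rate, and dummy batch are assumptions, not details taken from the paper:

```python
# Hypothetical fine-tuning sketch: ImageNet-pretrained ResNet-50 adapted to 12 classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 12  # 12 crop leaf disease categories reported in the abstract

# Load ImageNet-pretrained weights, then replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Optionally freeze the backbone and train only the new head first (an assumption).
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```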

화재연구를 위한 대형 콘 칼로리미터의 설계 (Design of Large Cone Calorimeter for the Fire Study)

  • 이의주
    • 한국화재소방학회논문지 / Vol. 20, No. 4 / pp.65-71 / 2006
  • As fire research has become more active in recent years, key physical quantities for evaluating fire characteristics, such as the heat release rate, are being measured to validate models and to improve understanding of fire phenomena. Earlier fire research relied mainly on reduced-scale experiments because of limited laboratory space and funding, but since not all characteristics of the various fires can be derived from reduced-scale models, it is necessary to investigate fire characteristics at real scale. For this reason, large cone calorimeters capable of measuring heat outputs of typically 5 MW or more have been developed abroad over the past 20 years, and they have been improved and advanced based on new related technologies and knowledge of fire. In this study, the methods to be considered when designing a large cone calorimeter are explained for each component, and the knowledge and technologies required for future improvement are suggested.

국내외 수전해 기술 및 대규모 실증 프로젝트 진행 현황 (Current Status of Water Electrolysis Technology and Large-scale Demonstration Projects in Korea and Overseas)

  • 백종민;김수현
    • 한국수소및신에너지학회논문집 / Vol. 35, No. 1 / pp.14-26 / 2024
  • Global efforts continue toward the transition to a "carbon neutral (net zero)" society with zero carbon emissions by 2050. To this end, water electrolysis technology, which can store electricity generated from renewable energy in large quantities and over long periods in the form of hydrogen, is being developed. Recently, various research efforts and large-scale projects on "green hydrogen", which is produced without carbon emissions, have been conducted. In this paper, a comparison of water electrolysis technologies was carried out and, based on data provided by the International Energy Agency (IEA), large-scale water electrolysis demonstration projects were analyzed by technology, power supply, country, and end user. It is expected that, through the analysis of large-scale water electrolysis demonstration projects, research directions and road maps can be provided for the development and implementation of future commercial projects.

Computational analysis of large-scale genome expression data

  • Zhang, Michael
    • 한국생물정보학회:학술대회논문집 / 한국생물정보시스템생물학회 2000년도 International Symposium on Bioinformatics / pp.41-44 / 2000
  • With the advent of DNA microarray and "chip" technologies, gene expression in an organism can be monitored on a genomic scale, allowing the transcription levels of many genes to be measured simultaneously. Functional interpretation of massive expression data and linking such data to DNA sequences have become the new challenges to bioinformatics. I will use yeast cell cycle expression data analysis as an example to demonstrate how specialized databases and computational methods may be used for extracting functional information. I will also briefly describe a novel clustering algorithm which has been applied to the cell cycle data.
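
The abstract does not specify the clustering algorithm used. Purely as a generic stand-in for the workflow it implies (genes × time points → cluster labels), a hierarchical clustering sketch on assumed dummy data might look like this:

```python
# Hypothetical stand-in: generic hierarchical clustering of a gene-expression matrix.
# The paper's "novel clustering algorithm" is not specified in the abstract; this only
# illustrates the general workflow (genes x time points -> cluster labels).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expression = rng.normal(size=(200, 18))   # 200 genes x 18 cell-cycle time points (dummy)

# Correlation-based distance is a common choice for expression profiles.
Z = linkage(expression, method="average", metric="correlation")
labels = fcluster(Z, t=10, criterion="maxclust")   # cut the tree into 10 clusters
print("cluster sizes:", np.bincount(labels)[1:])
```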


HORIZON RUN 4 SIMULATION: COUPLED EVOLUTION OF GALAXIES AND LARGE-SCALE STRUCTURES OF THE UNIVERSE

  • KIM, JUHAN;PARK, CHANGBOM;L'HUILLIER, BENJAMIN;HONG, SUNGWOOK E.
    • 천문학회지 / Vol. 48, No. 4 / pp.213-228 / 2015
  • The Horizon Run 4 is a cosmological N-body simulation designed for the study of coupled evolution between galaxies and large-scale structures of the Universe, and for the test of galaxy formation models. Using 6300³ gravitating particles in a cubic box of Lbox = 3150 h⁻¹ Mpc, we build a dense forest of halo merger trees to trace the halo merger history with a halo mass resolution scale down to Ms = 2.7 × 10¹¹ h⁻¹ M⊙. We build a set of particle and halo data, which can serve as testbeds for comparison of cosmological models and gravitational theories with observations. We find that the FoF halo mass function shows a substantial deviation from the universal form with tangible redshift evolution of amplitude and shape. At higher redshifts, the amplitude of the mass function is lower, and the functional form is shifted toward larger values of ln(1/σ). We also find that the baryonic acoustic oscillation feature in the two-point correlation function of mock galaxies becomes broader with a peak position moving to smaller scales and the peak amplitude decreasing for increasing directional cosine μ compared to the linear predictions. From the halo merger trees built from halo data at 75 redshifts, we measure the half-mass epoch of halos and find that less massive halos tend to reach half of their current mass at higher redshifts. Simulation outputs including snapshot data, past lightcone space data, and halo merger data are available at http://sdss.kias.re.kr/astro/Horizon-Run4.
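
For context on the ln(1/σ) variable mentioned above, the standard "universal form" convention for the halo mass function (a textbook definition, not a result of the paper) is:

```latex
% Universal-form convention for the halo mass function; universality means
% f(\sigma) is (approximately) independent of redshift and cosmology.
\frac{dn}{d\ln M} = f(\sigma)\,\frac{\bar{\rho}_m}{M}\,
  \left|\frac{d\ln \sigma^{-1}}{d\ln M}\right|,
\qquad
\sigma^2(M) = \frac{1}{2\pi^2}\int_0^{\infty} P(k)\,\hat{W}^2(kR)\,k^2\,dk,
\qquad
M = \tfrac{4}{3}\pi \bar{\rho}_m R^3 .
```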

Very deep super-resolution for efficient cone-beam computed tomographic image restoration

  • Hwang, Jae Joon;Jung, Yun-Hoa;Cho, Bong-Hae;Heo, Min-Suk
    • Imaging Science in Dentistry / Vol. 50, No. 4 / pp.331-337 / 2020
  • Purpose: As cone-beam computed tomography (CBCT) has become the most widely used 3-dimensional (3D) imaging modality in the dental field, storage space and costs for large-capacity data have become an important issue. Therefore, if 3D data can be stored at a clinically acceptable compression rate, the burden in terms of storage space and cost can be reduced and data can be managed more efficiently. In this study, a deep learning network for super-resolution was tested to restore compressed virtual CBCT images. Materials and Methods: Virtual CBCT image data were created from a publicly available online dataset (CQ500) of multidetector computed tomography images using CBCT reconstruction software (TIGRE). A very deep super-resolution (VDSR) network was trained to restore high-resolution virtual CBCT images from the low-resolution virtual CBCT images. Results: The images reconstructed by VDSR showed better image quality than those restored by bicubic interpolation at various scale ratios. The highest scale ratio with clinically acceptable reconstruction accuracy using VDSR was 2.1. Conclusion: VDSR showed promising restoration accuracy in this study. In the future, it will be necessary to experiment with new deep learning algorithms and large-scale data for clinical application of this technology.
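
As a hedged illustration of the VDSR-style architecture referenced above (a deep stack of 3×3 convolutions with a global residual connection, applied to an upscaled low-resolution image), the following sketch uses assumed depth and channel counts rather than the paper's exact configuration:

```python
# Hypothetical VDSR-style network: deep stack of 3x3 convs with a global residual
# connection, applied to an already-upscaled (e.g., bicubic) low-resolution input.
import torch
import torch.nn as nn

class VDSR(nn.Module):
    def __init__(self, depth=20, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Learn only the residual (high-frequency detail) and add it back to the input.
        return x + self.body(x)

# A single grayscale slice, already upsampled to the target size (dummy data).
lr_upsampled = torch.randn(1, 1, 256, 256)
sr = VDSR()(lr_upsampled)
print(sr.shape)  # torch.Size([1, 1, 256, 256])
```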

Automatic 3D soil model generation for southern part of the European side of Istanbul based on GIS database

  • Sisman, Rafet;Sahin, Abdurrahman;Hori, Muneo
    • Geomechanics and Engineering / Vol. 13, No. 6 / pp.893-906 / 2017
  • Automatic large-scale soil model generation is a very critical stage in earthquake hazard simulation of urban areas. Manual model development may cause data losses and may not be effective when there are too many data from different soil observations over a wide area. Geographic information systems (GIS) for storing and analyzing spatial data help scientists generate better models automatically. Although the original soil observations were limited to soil profile data, recent developments in mapping technology, interpolation methods, and remote sensing have enabled more advanced soil model development. Together with advanced computational technology, it is possible to handle much larger volumes of data and to address the difficult problem of describing the spatial variation of soil. In this study, an algorithm is proposed for automatic three-dimensional soil and velocity model development of the southern part of the European side of Istanbul, next to the Sea of Marmara, based on GIS data. In the proposed algorithm, the bedrock surface is first generated from the integration of geological and geophysical measurements. Then, layer surface contacts are integrated with data gathered in vertical borings, and interpolations on sections between the borings are carried out automatically. A three-dimensional underground geology model is prepared using the boring data, geologic cross-sections, and formation base contours drawn in light of these data. During the preparation of the model, classification studies are made based on formation models. Then, 3D velocity models are developed using geophysical measurements such as refraction-microtremor, array microtremor, and PS logging. The soil and velocity models are integrated, and the final soil model is obtained. All stages of this algorithm are carried out automatically in the selected urban area. The system directly reads the GIS soil data for the selected part of the urban area, and a 3D soil model is automatically developed for large-scale earthquake hazard simulation studies.
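
As a small, hypothetical sketch of one step in the workflow described above, namely interpolating a layer surface from scattered boring locations onto a regular grid, the following uses dummy coordinates and depths; the paper's GIS pipeline is considerably more involved:

```python
# Hypothetical sketch: interpolate a layer surface (e.g., bedrock depth) from
# scattered boring locations onto a regular grid. All data below are dummy values.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
boring_xy = rng.uniform(0, 10_000, size=(150, 2))        # boring locations (m), assumed
bedrock_depth = 30 + 0.002 * boring_xy[:, 0] + rng.normal(0, 2, 150)  # depth (m), dummy

# Regular grid over the study area.
gx, gy = np.meshgrid(np.linspace(0, 10_000, 200), np.linspace(0, 10_000, 200))

# Linear interpolation between borings; nearest-neighbour fill outside the convex hull.
surface = griddata(boring_xy, bedrock_depth, (gx, gy), method="linear")
fill = griddata(boring_xy, bedrock_depth, (gx, gy), method="nearest")
surface = np.where(np.isnan(surface), fill, surface)
print("interpolated bedrock surface:", surface.shape)
```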

대용량 공간 자료들의 세그먼테이션에서의 모수들의 최적화 (Optimization of parameters in segmentation of large-scale spatial data sets)

  • 오미라;이현주
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2008년도 하계종합학술대회 / pp.897-898 / 2008
  • Array comparative genomic hybridization (aCGH) has been used to detect chromosomal regions of amplification or deletion, which allows identification of new cancer-related genes. Because raw aCGH data, a type of large-scale spatial data, contain a significant amount of noise, segmenting genomic DNA regions to detect the true underlying copy number aberrations (CNAs) has been an important research issue. In this study, we focus on applying a segmentation method to multiple data sets. We compare two different threshold values for analyzing aCGH data with the CBS method [1]: a p-value-based cutoff and Q ± 1.5 × IQR.
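
As a minimal sketch of the Q ± 1.5 × IQR cutoff mentioned above (under the assumption that it is the usual interquartile-range outlier rule applied to segment means), flagging candidate aberrations could look like this; the CBS segmentation itself is not reproduced:

```python
# Hypothetical sketch of a Q +/- 1.5*IQR threshold: segment means outside this range
# are flagged as candidate copy-number aberrations. Dummy data, not the paper's.
import numpy as np

rng = np.random.default_rng(2)
segment_means = rng.normal(0, 0.1, 300)           # dummy log2-ratio segment means
segment_means[[10, 50, 120]] = [0.9, -0.8, 1.2]   # injected aberrations

q1, q3 = np.percentile(segment_means, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr     # one common reading of "Q +/- 1.5 IQR"

aberrant = np.where((segment_means < lower) | (segment_means > upper))[0]
print("flagged segments:", aberrant)
```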


유전체 코호트 연구의 주요 통계학적 과제 (Statistical Issues in Genomic Cohort Studies)

  • 박소희
    • Journal of Preventive Medicine and Public Health / Vol. 40, No. 2 / pp.108-113 / 2007
  • When conducting large-scale cohort studies, numerous statistical issues arise across the range of study design, data collection, data analysis, and interpretation. In genomic cohort studies, these statistical problems become more complicated and need to be dealt with carefully. Rapid technical advances in genomic studies produce enormous amounts of data to be analyzed, and traditional statistical methods are no longer sufficient to handle these data. In this paper, we reviewed several important statistical issues that occur frequently in large-scale genomic cohort studies, including measurement error and the relevant correction methods, cost-efficient design strategies for main cohort and validation studies, inflated Type I error, gene-gene and gene-environment interactions, and time-varying hazard ratios. It is very important to employ appropriate statistical methods in order to make the best use of valuable cohort data and to produce valid and reliable study results.
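
As one concrete, hypothetical illustration of the inflated Type I error issue listed above (not an example from the paper), a simple multiple-testing adjustment of per-marker p-values:

```python
# Hypothetical illustration of inflated Type I error: adjusting per-marker p-values
# for multiple testing. Dummy data; not taken from the paper.
import numpy as np

rng = np.random.default_rng(3)
p_values = rng.uniform(size=100_000)     # dummy p-values for 100,000 genetic markers
alpha = 0.05

# Naive testing at alpha would flag ~5,000 false positives by chance alone.
naive_hits = np.sum(p_values < alpha)

# Bonferroni correction controls the family-wise error rate.
bonferroni_hits = np.sum(p_values < alpha / p_values.size)
print(f"naive: {naive_hits}, Bonferroni: {bonferroni_hits}")
```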