• Title/Summary/Keyword: cluster calculation


Photoluminescence Properties of $Zn_{2-x-y}SiO_4:Mn_x,\;M_y$ Phosphors ($Zn_{2-x-y}SiO_4:Mn_x,\;M_y$계 형광체의 발광특성)

  • Cho, Bong Hyun;Sohn, Kee Sun;Park, Hee Dong;Chang, Hyun Ju;Hwang, Taek Sung
    • Journal of the Korean Chemical Society
    • /
    • v.43 no.2
    • /
    • pp.206-212
    • /
    • 1999
  • The main objective of the present investigation is to improve the photoluminescent performance of existing $Zn_2SiO_4:Mn$ phosphors by introducing a new co-dopant. The co-doping effect of Mg and/or Cr on emission intensity and decay time was studied. The co-dopants incorporated into the $Zn_2SiO_4:Mn$ phosphors are believed to alter the internal energy states, so that changes in emission intensity and decay time can be expected. Both Mg and Cr ions have a favourable influence on the photoluminescence properties: for example, the Mg ion enhances the intensity of the manganese green emission and the Cr ion shortens the decay time. The enhancement in emission intensity of $Zn_2SiO_4:Mn,\;Mg$ phosphors was interpreted by taking into account the results of a DV-X${\alpha}$ embedded cluster calculation. On the other hand, energy transfer between Mn and Cr ions was found to be responsible for the shortening of the decay time in $Zn_2SiO_4:Mn,\;Cr$ phosphors.
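As a point of reference only (not taken from the paper), decay-time shortening by energy transfer to a co-dopant is commonly described by adding a transfer rate to the intrinsic decay rate; the symbols below are generic:

$$\frac{1}{\tau_{\mathrm{eff}}} = \frac{1}{\tau_{0}} + k_{\mathrm{ET}} \quad\Longrightarrow\quad \tau_{\mathrm{eff}} = \frac{\tau_{0}}{1 + k_{\mathrm{ET}}\,\tau_{0}} < \tau_{0},$$

where $\tau_{0}$ is the intrinsic Mn emission decay time and $k_{\mathrm{ET}}$ is the Mn${\to}$Cr energy-transfer rate.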


Implementation of Massive FDTD Simulation Computing Model Based on MPI Cluster for Semi-conductor Process (반도체 검증을 위한 MPI 기반 클러스터에서의 대용량 FDTD 시뮬레이션 연산환경 구축)

  • Lee, Seung-Il;Kim, Yeon-Il;Lee, Sang-Gil;Lee, Cheol-Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.9
    • /
    • pp.21-28
    • /
    • 2015
  • In the semiconductor process, a simulation is performed to detect defects by analyzing the behavior of impurities through calculation of the physical quantities of the inner elements. The Finite-Difference Time-Domain (FDTD) algorithm is used to perform this simulation. As semiconductor devices are composed of ever smaller, nanoscale elements, the size of the simulation keeps growing. Problems arise when a single processor such as a CPU or GPU cannot perform the simulation because of the massive size of the matrices, or when a computer consisting of multiple processors of one type cannot handle a massive FDTD run. To address these problems, studies on parallel/distributed computing have been carried out. In the past, however, only a single type of processor was used: a GPU is fast but has limited memory, while a CPU is slower than a GPU. To solve this problem, we implemented a computing model that can handle an FDTD simulation of any size on a cluster consisting of heterogeneous processors. We tested the simulation on these processors using MPI libraries based on point-to-point communication and verified that it operates correctly regardless of the number and type of nodes. We also analyzed the performance by measuring the total execution time and the time spent in specific stages of the simulation in each test.
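The abstract describes point-to-point MPI communication but gives no code; the following is a minimal illustrative sketch (not the authors' implementation) of a 1-D-decomposed FDTD update with halo exchange, using mpi4py with hypothetical grid sizes and normalized constants:

```python
# Minimal sketch: 1D FDTD with MPI point-to-point halo exchange.
# Illustrative only; grid size, update constants, and decomposition
# are hypothetical and not taken from the paper.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_GLOBAL = 1200                      # total grid cells (assumed)
n_local = N_GLOBAL // size           # cells owned by this rank
ez = np.zeros(n_local + 2)           # +2 ghost cells for neighbours
hy = np.zeros(n_local + 2)

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(500):
    # exchange boundary Ez values with neighbours (point-to-point)
    comm.Sendrecv(sendbuf=ez[1:2], dest=left,
                  recvbuf=ez[n_local + 1:n_local + 2], source=right)
    # update H from E, exchange H, then update E from H
    hy[1:n_local + 1] += 0.5 * (ez[2:n_local + 2] - ez[1:n_local + 1])
    comm.Sendrecv(sendbuf=hy[n_local:n_local + 1], dest=right,
                  recvbuf=hy[0:1], source=left)
    ez[1:n_local + 1] += 0.5 * (hy[1:n_local + 1] - hy[0:n_local])
    # a hard source injected by rank 0 (arbitrary choice for the sketch)
    if rank == 0:
        ez[1] = np.sin(0.1 * step)
```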

Implementation of Multicore-Aware Load Balancing on Clusters through Data Distribution in Chapel (클러스터 상에서 다중 코어 인지 부하 균등화를 위한 Chapel 데이터 분산 구현)

  • Gu, Bon-Gen;Carpenter, Patrick;Yu, Weikuan
    • The KIPS Transactions:PartA
    • /
    • v.19A no.3
    • /
    • pp.129-138
    • /
    • 2012
  • In distributed memory architectures like clusters, each node stores a portion of the data. How data is distributed across nodes influences the performance of such systems. A data distribution scheme is the strategy used to distribute data across nodes and realize parallel data processing. Due to various reasons such as maintenance, scale-up, upgrades, etc., the performance of the nodes in a cluster can often become non-identical. In such clusters, a distribution scheme that does not consider node performance cannot distribute data efficiently. In this paper, we propose a new data distribution scheme based on the number of cores in each node, using the number of cores as the performance factor. In our scheme, each node is allocated an amount of data proportional to the number of cores it has. We implement our data distribution scheme in the Chapel language. To show that our data distribution is effective in reducing the execution time of parallel applications, we implement Mandelbrot set and ${\pi}$-calculation programs with our data distribution scheme and compare the execution times on a cluster. Based on experimental results on clusters of 8-core and 16-core nodes, we demonstrate that data distribution based on the number of cores can contribute to a reduction in the execution times of parallel programs on clusters.
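The paper's implementation is in Chapel; purely as an illustration of the allocation rule (data proportional to core count), here is a small Python sketch with invented node core counts:

```python
# Sketch: split a 1-D index range across nodes in proportion to core count.
# Node core counts below are illustrative, not from the paper.
def core_proportional_blocks(total_elems, cores_per_node):
    total_cores = sum(cores_per_node)
    # provisional share for each node, rounded down
    blocks = [total_elems * c // total_cores for c in cores_per_node]
    # hand any leftover elements to the nodes with the most cores
    leftover = total_elems - sum(blocks)
    for i in sorted(range(len(blocks)),
                    key=lambda i: cores_per_node[i], reverse=True)[:leftover]:
        blocks[i] += 1
    return blocks

if __name__ == "__main__":
    # e.g. a 3-node cluster mixing 8-core and 16-core machines
    print(core_proportional_blocks(1000, [8, 16, 16]))  # -> [200, 400, 400]
```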

A Study of Computational Literature Analysis based Classification for a Pairwise Comparison by Contents Similarity in a section of Tokkijeon, 'Fish Tribe Conference' (컴퓨터 문헌 분석 기반의 토끼전 '어족회의' 대목 내용 유사도에 따른 이본 계통 분류 연구)

  • Kim, Dong-Keon;Jeong, Hwa-Young
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.5
    • /
    • pp.15-25
    • /
    • 2022
  • This study aims to identify the family and lineage of the 'Fish Tribe Conference' section of Tokkijeon by utilizing computational literature analysis techniques. First, we encode the content type of each paragraph of every variant text (yibon) to build a corpus, and based on this corpus we use the Hamming distance to calculate a distance matrix between the variants. We visualized the clustering pattern of the variants by applying multidimensional scaling and hierarchical clustering, and explored the family and lineage characteristics of the 'Fish Tribe Conference' section in comparison with an existing cluster analysis of the entire set of paragraphs of Tokkijeon. As a result, unlike the cluster analysis of the whole of Tokkijeon, which yields six groups, the 'Fish Tribe Conference' section yields five groups, with some variants assigned to different groups. The contribution of this study is that the relative distances between the variants were measured and a systematic classification was performed in an objective and empirical way by computation, revealing characteristics of the lineage of the 'Fish Tribe Conference' section that do not appear in the analysis of the entire text of Tokkijeon.
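A minimal sketch of the pipeline described above, assuming each variant is already encoded as an equal-length sequence of paragraph-type codes (the sequences below are invented placeholders, not the actual encodings):

```python
# Sketch: pairwise Hamming distances between encoded variant texts,
# followed by MDS and hierarchical clustering. The codes are placeholders.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

variants = {                       # paragraph-type codes per variant (invented)
    "A": [1, 2, 2, 3, 1, 4],
    "B": [1, 2, 3, 3, 1, 4],
    "C": [2, 2, 3, 3, 2, 4],
}
X = np.array(list(variants.values()))

# Hamming distance = fraction of positions whose codes differ
D = squareform(pdist(X, metric="hamming"))

# 2-D configuration for visualisation (coords can be scatter-plotted)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

# hierarchical clustering on the condensed distance matrix
Z = linkage(pdist(X, metric="hamming"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(variants, labels)))
```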

A Study on Distributed System Construction and Numerical Calculation Using Raspberry Pi

  • Ko, Young-ho;Heo, Gyu-Seong;Lee, Sang-Hyun
    • International journal of advanced smart convergence
    • /
    • v.8 no.4
    • /
    • pp.194-199
    • /
    • 2019
  • As system performance increases, data is increasingly processed in parallel rather than serially. Today's CPU architectures have been developed to leverage multiple cores, and data processing methods are accordingly being developed to enable parallel processing. In recent years desktop CPUs have gained more cores, data has grown exponentially, and the need for large-scale data processing has also grown with the development of artificial intelligence. The neural networks used in artificial intelligence consist of matrices, which makes them well suited to parallel processing. Against this backdrop, this paper aims to speed up processing by using Raspberry Pi boards to build a cluster and implement a parallel processing system. The Raspberry Pi is a credit-card-sized single-board computer made by the Raspberry Pi Foundation in England, developed for education in schools and developing countries. It is cheap, and because many people use it, the information needed to work with it is easy to find. A distributed processing system must be supported by software that connects multiple computers in parallel and operates on the built system. The Raspberry Pi boards are connected to a switching hub, communicate with one another over the internal network, and implement parallel processing internally using the Message Passing Interface (MPI). Parallel programs can be written in Python, and C or Fortran can also be used. The system was tested by multiplying a two-dimensional array of size 10000 by 0.1 in parallel. The tests showed a reduction in computation time, with parallelism scaling up to the maximum number of cores in the system. The system in this paper was built from Linux-based single-board computers, and testing on systems in different environments is thought to be required.
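As a rough illustration of the described test (not the authors' code), scaling a large two-dimensional array by 0.1 across MPI ranks could be sketched as follows; the exact array shape, even divisibility by the rank count, and the timing calls are assumptions:

```python
# Sketch: distribute a large 2-D array's rows across MPI ranks,
# multiply by 0.1, and time the parallel section. Sizes are assumed.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 10000
rows = N // size                          # assume N divides evenly by rank count
local = np.empty((rows, N))
data = np.random.rand(N, N) if rank == 0 else None

comm.Scatter(data, local, root=0)          # hand each rank a block of rows
comm.Barrier()
t0 = MPI.Wtime()
local *= 0.1                               # the actual computation
comm.Barrier()
elapsed = MPI.Wtime() - t0

result = np.empty((N, N)) if rank == 0 else None
comm.Gather(local, result, root=0)
if rank == 0:
    print(f"{size} ranks: {elapsed:.4f} s for the scaled multiply")
```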

Development of Korean Patient Classification System for Neonatal Care Nurses (한국형 신생아중환자간호 분류도구 개발)

  • Yu, Mi;Kim, Dong Yeon;Yoo, Cheong Suk
    • Journal of Korean Clinical Nursing Research
    • /
    • v.22 no.2
    • /
    • pp.205-216
    • /
    • 2016
  • Purpose: This study was performed to develop a valid and reliable Korean Patient Classification System for Neonatal care nurses (KPCSN). Methods: The study was conducted in tertiary and general hospitals graded 1~2 according to the nursing fee differentiation policy for NICU (neonatal intensive care unit) nurse staffing. Reliability was evaluated through the classification of 218 patients by 10 nurse managers and 56 staff nurses working in the NICUs of 10 hospitals. To verify construct validity, 208 patients were classified and compared by type of stay, gestational age, birth weight, and current body weight. Nursing time was measured by nurses, nurse managers, and nurse aides. For the calculation of the conversion index (total nursing time divided by the KPCSN score), 426 patients were classified using the KPCSN. Data were collected from September 5 to October 28, 2015, and analyzed using t-test, ANOVA, intraclass correlation coefficient, and non-hierarchical cluster analysis. Results: The final KPCSN consisted of 11 nursing categories, 71 nursing activities and 111 criteria. The reliability of the KPCSN was r=.83 (p<.001). The construct validity was established. The KPCSN score was classified into four groups: group 1: ${\leq}57$ points, group 2: 58~80 points, group 3: 81~108 points, and group 4: ${\geq}109$ points. The conversion index was calculated as 7.45 minutes/classification score. Conclusion: The KPCSN can be utilized to measure the specific and complex nursing demands of infants receiving care in NICUs.
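To make the scoring arithmetic concrete, the following small sketch (function names are ours, not part of the instrument) maps a KPCSN score to its group and to an estimated nursing time via the reported conversion index of 7.45 minutes per classification score:

```python
# Sketch: classify a KPCSN score into groups 1-4 and estimate nursing time
# using the reported conversion index (7.45 minutes per classification score).
CONVERSION_INDEX_MIN_PER_SCORE = 7.45

def kpcsn_group(score: int) -> int:
    if score <= 57:
        return 1          # group 1: <= 57 points
    if score <= 80:
        return 2          # group 2: 58-80 points
    if score <= 108:
        return 3          # group 3: 81-108 points
    return 4              # group 4: >= 109 points

def estimated_nursing_time_min(score: int) -> float:
    return score * CONVERSION_INDEX_MIN_PER_SCORE

for s in (50, 75, 100, 120):
    print(s, kpcsn_group(s), round(estimated_nursing_time_min(s), 1))
```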

Verification of the PMCEPT Monte Carlo dose Calculation Code for Simulations in Medical Physics (의학물리 분야에 사용하기 위한 PMCEPT 몬테카를로 도즈계산용 코드 검증)

  • Kum, O-Yeon
    • Progress in Medical Physics
    • /
    • v.19 no.1
    • /
    • pp.21-34
    • /
    • 2008
  • The parallel Monte Carlo electron and photon transport (PMCEPT) code [Kum and Lee, J. Korean Phys. Soc. 47, 716 (2006)] for calculating electron and photon beam doses has been developed based on the three-dimensional geometry defined by computed tomography (CT) images and implemented on a Beowulf PC cluster. Understanding the limitations of Monte Carlo codes is useful in order to avoid systematic errors in simulations and to suggest further improvement of the codes. We evaluated the PMCEPT code by comparing its normalized depth doses for electron and photon beams with those of the MCNP5, EGS4, DPM, and GEANT4 codes, and with measurements. The PMCEPT results agreed well with the others in homogeneous and heterogeneous media within an error of $1{\sim}3\%$ of the dose maximum. A computing time benchmark was also performed for two cases, showing that the PMCEPT code was approximately twenty times faster than MCNP5 for 20-MeV electron beams irradiated on a water phantom. For 18-MV photon beams irradiated on the water phantom, PMCEPT was three times faster than GEANT4. Thus, the results suggest that the PMCEPT code is indeed appropriate for both fast and accurate simulations.
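A small sketch of the comparison metric referred to above, i.e. the deviation between two normalized depth-dose curves expressed as a percentage of the dose maximum (the dose arrays here are synthetic placeholders, not the paper's data):

```python
# Sketch: compare two normalized depth-dose curves and report the
# maximum deviation as a percentage of the dose maximum.
import numpy as np

def max_deviation_percent_of_dmax(dose_a, dose_b):
    dose_a, dose_b = np.asarray(dose_a, float), np.asarray(dose_b, float)
    dmax = dose_a.max()
    return 100.0 * np.max(np.abs(dose_a - dose_b)) / dmax

# synthetic depth-dose curves standing in for PMCEPT vs. reference output
depth = np.linspace(0.0, 10.0, 101)                 # depth in cm (placeholder)
ref = np.exp(-((depth - 2.0) ** 2) / 4.0)           # placeholder reference curve
test = ref * (1.0 + 0.01 * np.sin(depth))           # ~1% perturbed curve
print(f"max deviation: {max_deviation_percent_of_dmax(ref, test):.2f}% of Dmax")
```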


Independent Component Analysis for Clustering Components by Using Fixed-Point Algorithm of Secant Method and Kurtosis (할선법의 고정점 알고리즘과 첨도에 의한 군집성의 독립성분분석)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.3
    • /
    • pp.336-341
    • /
    • 2004
  • This paper proposes an independent component analysis (ICA) using a fixed-point (FP) algorithm based on the secant method and kurtosis. The secant-based FP algorithm is applied to improve analysis speed and performance by simplifying the calculation of the complex derivative required by Newton's method, and the kurtosis is applied to cluster the components. The proposed ICA was applied to the problems of separating 6 mixed signals of 500 samples and 8 mixed images of $512{\times}512$ pixels, respectively. The experimental results show that the proposed ICA always has a fixed analysis sequence, which overcomes the limitation of conventional secant-based ICA, whose sequence varies from run to run. In particular, the proposed ICA can be used for classifying and identifying signals or images.
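For orientation, a sketch of a deflationary fixed-point ICA with the kurtosis ($u^3$) nonlinearity is given below; it uses the standard analytic update rather than the paper's secant-method derivative approximation, which is only indicated in a comment:

```python
# Sketch: deflationary fixed-point ICA with a kurtosis (u**3) nonlinearity.
# Standard kurtosis-based update; the paper's secant-method approximation
# of the derivative term is only indicated in the comment below.
import numpy as np

def whiten(x):
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(np.cov(x))
    return (e / np.sqrt(d)).T @ x          # whitened (unit-covariance) signals

def fixed_point_ica(x, n_components, n_iter=200, tol=1e-6):
    z = whiten(x)
    w_all = np.zeros((n_components, z.shape[0]))
    for k in range(n_components):
        w = np.random.default_rng(k).standard_normal(z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            u = w @ z
            # kurtosis-based update: E[z u^3] - 3 w; a secant-method variant
            # would replace the analytic derivative (the factor 3) with a
            # finite-difference estimate instead of computing it directly.
            w_new = (z * u ** 3).mean(axis=1) - 3.0 * w
            # deflate against components already found, then renormalize
            w_new -= w_all[:k].T @ (w_all[:k] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < tol
            w = w_new
            if converged:
                break
        w_all[k] = w
    return w_all @ z                        # estimated independent components
```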

CLB-Based CPLD Low Power Technology Mapping Algorithm for Trade-off (상관관계에 의한 CLB구조의 CPLD 저전력 기술 매핑 알고리즘)

  • Kim Jae-Jin;Lee Kwan-Houng
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.2 s.34
    • /
    • pp.49-57
    • /
    • 2005
  • In this paper, a CLB-based CPLD low power technology mapping algorithm for trade-off is proposed. To perform low power technology mapping for a CPLD, a given Boolean network has to be represented as a DAG. The proposed algorithm consists of three steps. In the first step, TD (transition density) calculation is performed: total power consumption is obtained by calculating the switching activity of each node in the DAG. In the second step, the feasible clusters are generated by considering the following conditions: the number of outputs, the number of inputs, and the number of OR-terms of a CLB within the CPLD. The common node cluster merging method, the node separation method, and the node duplication method are used to produce the feasible clusters. The proposed algorithm is evaluated using benchmarks in SIS. In the case where the number of OR-terms is 5, the experimental results show reductions in power consumption of 30.73$\%$ compared with TEMPLA and 17.11$\%$ compared with PLAmap, respectively.
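The first step, accumulating a power estimate from per-node transition densities, might be sketched as follows; the dynamic-power form and all constants are generic assumptions, not values from the paper:

```python
# Sketch: sum a dynamic-power term over DAG nodes from transition density,
# with TD interpreted here as expected transitions per clock cycle:
# P_dyn ~ 0.5 * C_load * Vdd^2 * f_clk * TD(node). Constants are assumed.
VDD, F_CLK = 1.8, 50e6                   # volts, Hz (illustrative)

def total_dynamic_power(nodes):
    """nodes: iterable of (load_capacitance_F, transition_density) pairs."""
    return sum(0.5 * c * VDD ** 2 * F_CLK * td for c, td in nodes)

# a tiny example DAG's nodes: (load capacitance in farads, transition density)
example_nodes = [(5e-15, 0.20), (8e-15, 0.35), (3e-15, 0.10)]
print(f"estimated dynamic power: {total_dynamic_power(example_nodes):.3e} W")
```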


Classification of Land Cover over the Korean Peninsula using MODIS Data (MODIS 자료를 이용한 한반도 지면피복 분류)

  • Kang, Jeon-Ho;Suh, Myoung-Seok;Kwak, Chong-Heum
    • Atmosphere
    • /
    • v.19 no.2
    • /
    • pp.169-182
    • /
    • 2009
  • To improve the performance of climate and numerical models, interest in land-atmosphere schemes has steadily increased in recent years. For a realistic calculation of land-atmosphere interaction, high-quality land surface information is strongly required. In this study, a new land cover map over the Korean peninsula was developed using MODIS (MODerate resolution Imaging Spectroradiometer) data. The seven phenological variables (maximum, minimum, amplitude, average, growing period, and growing and shedding rates) derived from the 15-day normalized difference vegetation index (NDVI) were used as the basic input data. ISOData (Iterative Self-Organizing Data Analysis), a kind of unsupervised non-hierarchical clustering method, was applied to the seven phenological variables. After the clustering, assignment of a land cover type to each cluster was performed according to the phenological characteristics of each land cover type defined by the USGS (U.S. Geological Survey). Most of the Korean peninsula is occupied by deciduous broadleaf forest (46.5%), mixed forest (15.6%), and dryland crop (13%), whereas the dominant land cover types in South Korea are very diverse: evergreen needleleaf forest (29.9%), mixed forest (26.6%), deciduous broadleaf forest (16.2%), irrigated crop (12.6%), and dryland crop (10.7%). The 38 in-situ observation database over South Korea, the Environment Geographic Information System, and Google Earth were used in the validation of the new land cover map. In general, the new land cover map over the Korean peninsula appears to be better classified than the USGS land cover map, especially for the areas labelled Savanna in the USGS land cover map.
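A compact sketch of deriving phenology-style metrics from a 15-day NDVI series and clustering them; KMeans stands in here for ISODATA (which additionally splits and merges clusters), and the NDVI values are synthetic:

```python
# Sketch: phenological metrics from a 15-day NDVI time series per pixel,
# then unsupervised clustering. KMeans is used as a stand-in for ISODATA.
import numpy as np
from sklearn.cluster import KMeans

def phenological_metrics(ndvi):
    """ndvi: (n_pixels, 24) array of 15-day composites over one year."""
    vmax, vmin = ndvi.max(axis=1), ndvi.min(axis=1)
    amp, avg = vmax - vmin, ndvi.mean(axis=1)
    growing = (ndvi > (vmin + 0.5 * amp)[:, None]).sum(axis=1)  # periods above half-amplitude
    rate_up = np.diff(ndvi, axis=1).clip(min=0).max(axis=1)     # steepest green-up
    rate_down = -np.diff(ndvi, axis=1).clip(max=0).min(axis=1)  # steepest shedding
    return np.column_stack([vmax, vmin, amp, avg, growing, rate_up, rate_down])

rng = np.random.default_rng(0)
ndvi = rng.uniform(0.1, 0.9, size=(1000, 24))          # synthetic pixel series
features = phenological_metrics(ndvi)
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(clusters))                            # pixels per cluster
```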