• Title/Summary/Keyword: case module


GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference / 2012.08a / pp.80-81 / 2012
  • Recently, one of the critical issues in the etching processes for nanoscale devices is achieving an ultra-high aspect ratio contact (UHARC) profile without anomalous behaviors such as sidewall bowing and twisting. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still face formidable challenges, such as tight limits on sidewall bowing and control of the randomly distorted features in nanoscale etch profiles. Furthermore, the absence of suitable plasma simulation tools has made it difficult to develop revolutionary technologies, including novel plasma chemistries and plasma sources, to overcome these process limitations. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for the silicon dioxide etching process in inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a quadrupole mass spectrometer (QMS). The surface chemistries of the etched samples were measured by X-ray photoelectron spectroscopy. To measure plasma parameters, a self-cleaned RF Langmuir probe was used to cope with the polymer-deposition environment on the probe tip, and the results were double-checked with the cutoff probe, which is known to be a precise diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance-potential methods using the QMS signal. Based on these experimental data, we proposed a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. The predicted surface reaction modeling results showed good agreement with the experimental data. Building on these plasma-surface reaction studies, we developed a 3D topography simulator using a multi-layer level set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. For ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required for realistic treatment of this ballistic transport. To address this issue, the related computational codes were efficiently parallelized for GPU (graphics processing unit) computing, so that the total computation time was reduced by more than a few hundred times compared with the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model, and realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.

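To make the topography-update step described in the abstract above more concrete, the following is a minimal NumPy sketch of a single first-order Godunov level-set step, in which the zero level set of `phi` is the etch front and `speed` is the local etch(+)/deposition(-) rate obtained from the flux calculation. The function name, the 2D restriction, and the explicit time stepping are illustrative assumptions; this is not the authors' multi-layer, GPU-parallelized implementation, which maps such per-cell updates onto GPU threads.

```python
import numpy as np

def advance_level_set(phi, speed, dx, dt):
    """One explicit step of phi_t + F |grad(phi)| = 0 (Godunov upwind scheme).

    phi   : 2D level-set field (zero contour = etch front, negative inside the solid)
    speed : 2D normal speed F, positive for etching, negative for deposition
    Periodic boundaries via np.roll are used here purely for brevity.
    """
    # One-sided finite differences in each direction
    dmx = (phi - np.roll(phi,  1, axis=0)) / dx   # backward difference in x
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx   # forward  difference in x
    dmy = (phi - np.roll(phi,  1, axis=1)) / dx
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx

    # Godunov's upwind gradient magnitude, chosen by the sign of the speed
    grad_pos = np.sqrt(np.maximum(dmx, 0.0)**2 + np.minimum(dpx, 0.0)**2 +
                       np.maximum(dmy, 0.0)**2 + np.minimum(dpy, 0.0)**2)
    grad_neg = np.sqrt(np.minimum(dmx, 0.0)**2 + np.maximum(dpx, 0.0)**2 +
                       np.minimum(dmy, 0.0)**2 + np.maximum(dpy, 0.0)**2)

    grad = np.where(speed > 0.0, grad_pos, grad_neg)
    return phi - dt * speed * grad
```

In a full simulator this step would sit inside a time loop, with `speed` recomputed each step from the neutral (deterministic) and ion (Monte Carlo) fluxes and the two-layer surface reaction model; it is the per-cell updates and the ray sampling that make GPU parallelization attractive.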

The study of the status of teaching and learning and needs assessment for 'The basis of the Invention Patent' subjects ('발명.특허 기초' 과목의 교수.학습 방법 실태 및 요구 조사 연구)

  • Lee, Chan Joo;Lee, Byung Wook;Kang, Kyoung Kyoon;Im, Yoo Hwa
    • 대한공업교육학회지 / v.38 no.1 / pp.105-124 / 2013
  • This study analyzes the current use of, and needs for, teaching and learning methods in the subject 'The basis of the Invention Patent'. To this end, we investigated the teaching and learning methods that teachers of the subject currently use, the methods they consider most desirable, and the requirements and difficulties they face in operating the subject. A survey of teachers teaching 'The basis of the Invention Patent' at 48 high schools across the country was conducted, and the results are as follows. First, theory-oriented learning activities accounted for a high proportion of class time, with methods such as 'lectures' used most often; practice-oriented learning modules relied heavily on 'projects', 'laboratory experiments', 'discussions', and 'investigations'. Second, teachers expressed a stronger demand for student-centered, experience- and practice-oriented methods such as 'laboratory experiments', 'projects', 'case studies', and 'field trips' than for theory-driven 'lectures'. Third, the perceived importance of the operational requirements for teaching and learning in the subject was high overall; in particular, teachers who had graduated from a college of education rated these requirements higher than those who had not. Fourth, the perceived difficulty of operating teaching and learning in the subject was also high overall; in particular, teachers cited a lack of time to prepare lessons due to excessive workloads, shortages of educational facilities and equipment, students' lack of prior knowledge of the subject, and the difficulty of accommodating individual differences among students.

Study on Overcoming Interference Factor by Automatic Synthesizer in Endotoxin Test (내독소 검사에서 자동합성장치에 따른 간섭요인 극복에 대한 연구)

  • Kim, Dong Il;Kim, Si Hwal;Chi, Yong Gi;Seok, Jae Dong
    • The Korean Journal of Nuclear Medicine Technology / v.16 no.2 / pp.3-6 / 2012
  • Purpose: Samsung Medical Center sought to identify the cause of the interference factor and to suggest a solution for it. Materials and Methods: Samples of $^{18}F$-FDG, a radiopharmaceutical, were produced by the TRACERlab MX and FASTlab synthesizers. The gel-clot method used a positive control tube (PCT) and a single test tube; the kinetic chromogenic method used the ENDOSAFE-PTS produced by Charles River. Results: In the gel-clot endotoxin test of the FASTlab product, both turbidity and viscosity increased at 40-fold dilution and a gel clot was detected. In the case of TRACERlab MX, a gel clot was detected in most samples but intermittently not in a few of them. When using ENDOSAFE-PTS, the sample CV (coefficient of variation) of FASTlab was 0% at all dilution rates, whereas the spike CV was 0% at 1-fold dilution, 0~35% at 10-fold, 3.6~12.9% at 20-fold, 5.2~7.1% at 30-fold, and 1.1~17.4% at 40-fold; spike recovery was 0% at 1-fold, 25~58% at 10-fold, 50~86% at 20-fold, 70~92% at 30-fold, and 75~120% at 40-fold. The sample CV of TRACERlab MX was 0% at all dilution rates, whereas the spike CV was 1.4~4.8% at 1-fold dilution and 0.6~19.9% at 10-fold; spike recovery was 35~72% at 1-fold dilution and 77~107% at 10-fold. Conclusion: Gel clotting appears to fail because of H$_3$PO$_4$, which binds the Mg$^{2+}$ ions that contribute to gelation inside the PCT. Dilution, which is equivalent to reducing the amount of H$_3$PO$_4$, could accordingly remove the interfering effect. Even in the kinetic chromogenic method, spike recovery fell within 70~150%, the range recommended by the supplier, at 40-fold dilution.

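For reference, the two quantities reported in the abstract above, the coefficient of variation (CV) of replicate readings and the spike (positive product control) recovery, are computed as follows. This is a minimal sketch with illustrative numbers, not data or code from the study; the 70~150% acceptance window in the comment is the supplier recommendation quoted in the conclusion.

```python
import statistics

def coefficient_of_variation(replicates):
    """CV (%) = sample standard deviation / mean x 100, for replicate endotoxin readings."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100.0

def spike_recovery(spiked_result, unspiked_result, spike_added):
    """Recovery (%) of a known endotoxin spike added to the sample.

    Values inside the supplier's recommended window (70~150% in the study above)
    indicate that the sample neither inhibits nor enhances the assay.
    """
    return (spiked_result - unspiked_result) / spike_added * 100.0

# Illustrative values only (EU/mL), not measurements from the paper
print(coefficient_of_variation([0.105, 0.110, 0.095]))      # ~7.4 %
print(spike_recovery(spiked_result=0.48, unspiked_result=0.05,
                     spike_added=0.50))                      # 86 %
```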

Understanding Problem-Solving Type Inquiry Learning and its Effect on the Improvement of Ability to Design Experiments: A Case Study on Science-Gifted Students (문제해결형 탐구학습에 대한 인식과 학습이 실험 설계 능력에 미친 효과 : 과학 영재학생들에 대한 사례 연구)

  • Ju, Mi-Na;Kim, Hyun-Joo
    • Journal of The Korean Association For Science Education / v.33 no.2 / pp.425-443 / 2013
  • We developed problem-solving type inquiry learning programs reflecting scientists' research process, and after implementing them in class we analyzed the activities of science-gifted high school students as well as their understanding of the programs and the programs' effects. For this study, twelve science-gifted students in the 10th grade participated in the program, which consisted of three different modules - making a cycloidal pendulum, surface growth, and synchronization using metronomes. The Diet Cola Test (DCT) was used to find the effect on the improvement of the ability to design experiments by comparing pre/post scores, with a survey and an interview conducted after the class. Each module consisted of a series of processes such as questioning a phenomenon scientifically, designing experiments to find solutions, and carrying out activities to solve the problems, which enabled students to experience a problem-solving type research process through the program. According to the analysis, most students understood the characteristics of problem-solving type inquiry learning programs reflecting scientists' research process. According to the students, this program class differed from their existing school classes in 'explaining phenomena scientifically,' 'designing experiments for themselves,' and 'repeating the experiments several times.' During the class, students had to think continuously, design several experiments, and carry them out to solve the problems they had identified at first, and in the end they were able to solve them. By repeating these kinds of activities, they were able to experience the scientists' research process, and they showed a positive attitude toward scientists' research by understanding the problem-solving type research process. These problem-solving type inquiry learning programs appear to have positive effects on students' ability to design experiments and to offer opportunities for critical argumentation about the causes of phenomena. The comparison of pre/post DCT scores revealed that almost every student improved his or her ability to design experiments. Students who were accustomed to following the teacher's instructions had difficulty designing experiments for themselves at the beginning of the class, but they gradually became used to doing so and finally were able to do it systematically.

Effects of CuO and B$_2$O$_3$ Additions on Microwave Dielectric Properties of PbWO$_4$-TiO$_2$ Ceramic (CuO B$_2$O$_3$ 첨가에 따른 PbWO$_4$-TiO$_2$ 세라믹스의 마이크로파 유전특성)

  • 최병훈;이경호
    • Journal of the Korean Ceramic Society / v.38 no.11 / pp.1046-1054 / 2001
  • Effects of B$_2$O$_3$ and CuO additions on the microwave dielectric properties of PbWO$_4$-TiO$_2$ ceramics were investigated in order to use this material as an LTCC material for the fabrication of a multilayered RF passive component module. We found that PbWO$_4$ could be used as an LTCC material because of its low sintering temperature (850°C) and fairly good microwave dielectric properties ($\varepsilon_r$ = 21.5, $Q \times f_0$ = 37200 GHz, and $\tau_f$ = -31 ppm/°C). In order to stabilize the $\tau_f$ of PbWO$_4$, TiO$_2$ was added to the PbWO$_4$ and the mixture was sintered at 850°C. A near-zero $\tau_f$ value (+0.2 ppm/°C) was obtained with 8.7 mol% TiO$_2$ addition; the $\varepsilon_r$ and $Q \times f_0$ values were 22.3 and 21400 GHz, respectively. It is believed that the decrease in $Q \times f_0$ with TiO$_2$ addition resulted from the increase in grain boundaries. In order to improve $Q \times f_0$, various amounts of B$_2$O$_3$ and CuO were added to the 0.913PbWO$_4$-0.087TiO$_2$ mixture. The optimum amount of CuO was 0.05 wt%; at this addition, the 0.913PbWO$_4$-0.087TiO$_2$ ceramic showed $\varepsilon_r$ = 23.5, $\tau_f$ = -2.2 ppm/°C, and $Q \times f_0$ = 32900 GHz after sintering at 850°C. In the case of B$_2$O$_3$ addition, the optimum range was 1.0~2.5 wt%, at which the following results were obtained: $\varepsilon_r$ = 20.3~22.1, $Q \times f_0$ = 48700~54700 GHz, and $\tau_f$ = +2.4~+8.2 ppm/°C.

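As a reminder of how the figures of merit quoted above are defined, the following sketch computes the temperature coefficient of resonant frequency $\tau_f$ (in ppm/°C) and the $Q \times f_0$ product (in GHz) from resonator measurements. The function names and the numbers in the example are illustrative assumptions, not measurements from the paper.

```python
def tau_f_ppm_per_degC(f_ref_hz, f_hot_hz, temp_ref_c, temp_hot_c):
    """Temperature coefficient of resonant frequency:
    tau_f = (f(T2) - f(T1)) / (f(T1) * (T2 - T1)) * 1e6  [ppm/degC]."""
    return (f_hot_hz - f_ref_hz) / (f_ref_hz * (temp_hot_c - temp_ref_c)) * 1e6

def q_times_f0_ghz(unloaded_q, f0_hz):
    """Quality factor x resonant frequency product, reported above in GHz."""
    return unloaded_q * f0_hz / 1e9

# Illustrative numbers only
print(tau_f_ppm_per_degC(9.0000e9, 9.0002e9, 25.0, 85.0))   # ~ +0.37 ppm/degC
print(q_times_f0_ghz(3700, 9.0e9))                          # 33300 GHz
```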

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.77-92 / 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the use of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually; however, manual categorization not only fails to guarantee accuracy but also requires a large amount of time and considerable cost. Many studies have therefore been conducted on the automatic assignment of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to complex documents with multiple topics, because they assume that one document can be assigned to only one category. To overcome this limitation, some studies have attempted to assign each document to multiple categories, but they in turn are limited in that their learning process requires training on a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove this requirement of traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the results of topic analysis of single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate matching scores for each document against multiple categories; a document is classified into a given category if and only if its matching score is higher than a predefined threshold. For example, a certain document can be classified into the three categories whose matching scores exceed the threshold. The main contribution of our study is that the methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles, which are clearly categorized by theme and contain less vulgar language and slang than other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies widely across categories, because readers have different levels of interest in each category and because events occur with different frequencies in each category. To minimize distortion of the results caused by the different numbers of articles per category, we extracted 3,000 articles from each of eight categories, for a total of 24,000 articles. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics."
Using the collected news articles, we calculated document/category correspondence scores from the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree of correspondence of each document to a certain category. As a result, we were able to suggest two additional categories for each of 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, precision, recall, and F-score varied considerably across the eight categories.
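
The scoring step described above can be pictured as follows: a document/topic weight matrix (for example from topic modeling of the single-categorized documents) is combined with a topic/category correspondence table to give document/category matching scores, and every category whose score exceeds a threshold is assigned. The matrix values, the dot-product combination, and the threshold below are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

# Illustrative correspondences: 4 documents x 3 topics, 3 topics x 5 categories
doc_topic = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.4, 0.1, 0.5],
                      [0.3, 0.3, 0.4]])            # document/topic scores
topic_cat = np.array([[0.9, 0.1, 0.0, 0.0, 0.0],
                      [0.0, 0.6, 0.3, 0.1, 0.0],
                      [0.0, 0.0, 0.2, 0.3, 0.5]])  # topic/category table

# Document/category matching scores: combine the two correspondences
doc_cat = doc_topic @ topic_cat

# Assign every category whose matching score exceeds the threshold
THRESHOLD = 0.30                                   # assumed cut-off
multi_labels = [np.flatnonzero(row > THRESHOLD).tolist() for row in doc_cat]
print(multi_labels)

# Alternatively, keep only the top 1-3 categories per document,
# mirroring the "top 1" and "top 1-3" evaluations reported above
top3 = np.argsort(-doc_cat, axis=1)[:, :3]
print(top3)
```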

Positron Annihilation Spectroscopy of Active Galactic Nuclei

  • Doikov, Dmytry N.;Yushchenko, Alexander V.;Jeong, Yeuncheol
    • Journal of Astronomy and Space Sciences / v.36 no.1 / pp.21-33 / 2019
  • This paper focuses on the interpretation of radiation fluxes from active galactic nuclei (hereafter AGN). The advantage of positron annihilation spectroscopy over other methods of spectral diagnostics of AGN is demonstrated. A relationship between the regular and random components in both the bolometric and spectral composition of the fluxes of quanta and particles generated in AGN is found. We consider their diffuse component separately and also detect radiative feedback after the passage of high-velocity cosmic rays and hard quanta through the gas-and-dust aggregates surrounding massive black holes in AGN. The motion of relativistic positrons and electrons in such complex systems produces secondary radiation throughout the whole investigated region of the AGN, in the form of a cylinder with radius R = 400-1000 pc and height H = 200-400 pc, thus causing their visible luminescence across all spectral bands. We obtain radiation and electron energy distribution functions depending on the spatial distribution of the investigated bulk of matter in the AGN. Radiative luminescence of the non-central part of an AGN is the response of the atoms, molecules and dust of its diffuse component to the particles and quanta falling from its center. The cross sections for the single-photon annihilation of positrons of different energies with atoms in these AGN are determined. For the first time, we use data on the change in chemical composition due to spallation reactions induced by high-energy particles. We establish, or define more accurately, how the energies of the incident positron, the emitted γ-quantum, and the recoiling nucleus correlate with the atomic number and weight of the target nucleus; for light elements, we provide detailed tables of all indicated parameters. A new criterion is proposed, based on the ratio of the fluxes of γ-quanta formed in one- and two-photon annihilation of positrons in a diffuse medium. It is concluded that, as in young supernova remnants, two-photon annihilation tends to occur in solid-state grains as a result of the active loss of the positrons' kinetic energy by ionisation down to the thermal energy of free electrons. Single-photon annihilation of positrons manifests itself in the gas component of AGN. Such annihilation occurs as an interaction between positrons and K-shell electrons; hence, it is suitable for identifying the chemical state of the substances comprising the gas component of the investigated media. Specific physical media producing high fluxes of positrons are discussed, which allowed a significant reduction in the number of reaction channels generating positrons. We estimate the brightness distribution in the γ-ray spectra of the gas-and-dust media through which positron fluxes travel, in an energy range similar to that recorded by the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) research module. Based on the results of our calculations, we analyse the reasons for the high penetrating power of positrons through gas-and-dust aggregates. The ionisation energy loss of positrons is compared to the production of secondary positrons by high-energy cosmic rays in order to determine the depth of their penetration into the gas-and-dust aggregations clustered in AGN.
The relationship between the energy of the γ-quanta emitted upon single-photon annihilation and the energy of the incident electrons is established. The obtained cross sections for positron interactions with bound electrons of the diffuse component of the non-central, peripheral AGN regions allowed us to obtain new spectroscopic characteristics of the atoms involved in single-photon annihilation.

THE EFFECT OF THE REPEATABILITY FILE IN THE NIRS FATTY ACIDS ANALYSIS OF ANIMAL FATS

  • Perez Marin, M.D.;De Pedro, E.;Garcia Olmo, J.;Garrido Varo, A.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.4107-4107 / 2001
  • Previous work has shown the viability of NIRS technology for the prediction of fatty acids in Iberian pig fat. Although the resulting equations showed high precision, important fluctuations were detected in the predictions of new samples, growing with the time elapsed between calibration development and NIRS analysis. This makes the use of NIRS calibrations in routine analysis difficult. Moreover, this problem appears only in products like fat, whose spectra show very well-defined absorption peaks at certain wavelengths; this causes a high sensitivity to small changes in the instrument, which are not detected by normal checks. To avoid these inconveniences, the WinISI 1.04 software provides a mathematical algorithm that creates a "Repeatability File". This file is used during calibration development to minimize the sources of variation that can affect NIRS predictions. The objective of the current work is to evaluate the use of a repeatability file in quantitative NIRS analysis of Iberian pig fat. A total of 188 samples of Iberian pig fat, produced by COVAP, were used. NIR data were recorded using a FOSS NIRSystems 6500 I spectrophotometer equipped with a spinning module. Samples were analysed by folded transmission, using two sample cells of 0.1 mm pathlength with a gold surface. High-accuracy calibration equations were obtained, without and with the repeatability file, to determine the content of six fatty acids: myristic (SECV$_{without}$ = 0.07%, r$^2_{without}$ = 0.76; SECV$_{with}$ = 0.08%, r$^2_{with}$ = 0.65), palmitic (SECV$_{without}$ = 0.28%, r$^2_{without}$ = 0.97; SECV$_{with}$ = 0.24%, r$^2_{with}$ = 0.98), palmitoleic (SECV$_{without}$ = 0.08%, r$^2_{without}$ = 0.94; SECV$_{with}$ = 0.09%, r$^2_{with}$ = 0.92), stearic (SECV$_{without}$ = 0.27%, r$^2_{without}$ = 0.97; SECV$_{with}$ = 0.29%, r$^2_{with}$ = 0.96), oleic (SECV$_{without}$ = 0.20%, r$^2_{without}$ = 0.99; SECV$_{with}$ = 0.20%, r$^2_{with}$ = 0.99) and linoleic (SECV$_{without}$ = 0.16%, r$^2_{without}$ = 0.98; SECV$_{with}$ = 0.16%, r$^2_{with}$ = 0.98). The use of a repeatability file as a tool to reduce the sources of variation that can disturb prediction accuracy was very effective. Although the differences in the calibration results are negligible, the effect of the repeatability file is appreciated mainly when predicting new samples that are not in the calibration set and whose spectra were recorded long after the equation development. In this case, the bias values for the fatty acid predictions were lower when the repeatability file was used: myristic (bias$_{without}$ = -0.05, bias$_{with}$ = -0.04), palmitic (bias$_{without}$ = -0.42, bias$_{with}$ = -0.11), palmitoleic (bias$_{without}$ = -0.03, bias$_{with}$ = 0.03), stearic (bias$_{without}$ = 0.47, bias$_{with}$ = 0.28), oleic (bias$_{without}$ = 0.14, bias$_{with}$ = -0.04) and linoleic (bias$_{without}$ = 0.25, bias$_{with}$ = -0.20).

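The statistics used above to compare calibrations with and without the repeatability file can be reproduced from paired NIRS predictions and laboratory reference values. The sketch below computes the bias and the bias-corrected standard error of prediction; SECV is the same standard-error statistic computed on cross-validation residuals. Function names and the example arrays are illustrative, not data from the study.

```python
import numpy as np

def bias(predicted, reference):
    """Mean signed difference between NIRS predictions and reference values."""
    d = np.asarray(predicted, float) - np.asarray(reference, float)
    return float(d.mean())

def sep_corrected(predicted, reference):
    """Standard error of prediction corrected for bias, SEP(C);
    SECV is the same statistic applied to cross-validation residuals."""
    d = np.asarray(predicted, float) - np.asarray(reference, float)
    return float(np.sqrt(np.sum((d - d.mean()) ** 2) / (len(d) - 1)))

# Illustrative oleic acid contents (%), not the paper's data
reference = [52.1, 54.0, 50.3, 55.2]
predicted = [52.3, 54.1, 50.0, 55.6]
print(bias(predicted, reference))           # 0.10
print(sep_corrected(predicted, reference))  # ~0.29
```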