• Title/Summary/Keyword: processing parameters

Search Result 2,726, Processing Time 0.035 seconds

A Study on the Tree Surgery Problem and Protection Measures in Monumental Old Trees (천연기념물 노거수 외과수술 문제점 및 보존 관리방안에 관한 연구)

  • Jung, Jong Soo
    • Korean Journal of Heritage: History & Science
    • /
    • v.42 no.1
    • /
    • pp.122-142
    • /
    • 2009
  • This study reviewed domestic and international theories on the maintenance and health enhancement of old and big trees, carried out an anatomical survey of the operated parts of trees to assess the current status of domestic tree surgery together with a perception survey of an expert group, and drew the following conclusions in the course of suggesting a reform plan. First, analysis of the correlations among the 67 subject trees' ages, growth status, and surroundings revealed that the condition of the operated parts was closely related to positional characteristics and damage size, but little related to the filler materials used. Second, the affected parts were most frequently bough-sheared areas under 0.09 m², and the hollow size by position was largest at 'root + stem', starting from behind the main root and stem; the correlation analysis gave the same result for the group with low correlation. Third, serious problems were found where fillers (especially urethane) had been charged into large hollows or exposed roots behind the root and stem, or where such areas had been surface-processed; the benefit of filling the hollow was analyzed to be small. Fourth, the surface finish currently used over fillers (artificial bark) is mainly 'epoxy + woven fabric + cork', but it is not flexible, so it has caused frequent cracks and surface failure at the joint with the tree-textured part. Fifth, the external status of the operated part was highly correlated with closeness, surface condition, formation of adhesive tissue, and the internal survey results. Sixth, the most influential cause of flushing due to wrong management of old and big trees was banking, and wrong pruning was the main source of above-ground damage; a small bough cut by the standard method recovers easily from its damage through the formation of adhesive tissue. Seventh, the parameter most affecting the handling of business related to old and big trees was 'the need for conscious reform of managers and related businesses'. Eighth, reform in the institutional aspect should include arranging the laws and organizations for the management and preservation of old and big trees. This study, which prepared a reform plan through a status survey of the designated old and big trees, is limited in that the reform plan was induced from a status survey based on individual investigation, and it is weak in presenting statistically grounded evidence; these points can be complemented by subsequent studies.
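
The kind of correlation analysis described above can be sketched as follows. This is a minimal illustration only: the column names, codings, and randomly generated values are hypothetical stand-ins for the survey data on the 67 trees, not the study's dataset.

```python
# Hypothetical sketch of correlating tree/operation variables with the condition
# of the operated part, in the spirit of the survey analysis described above.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_trees = 67
trees = pd.DataFrame({
    "age_years": rng.integers(100, 600, n_trees),
    "damage_area_m2": rng.uniform(0.01, 0.5, n_trees),
    "position_code": rng.integers(0, 4, n_trees),      # e.g. 0=bough, 1=stem, 2=root, 3=root+stem
    "filler_code": rng.integers(0, 3, n_trees),        # e.g. 0=urethane, 1=epoxy, 2=other
    "operation_condition": rng.uniform(0.0, 1.0, n_trees),  # 1 = operated part in good condition
})

# Pearson correlation of each variable with the condition of the operated part
print(trees.corr()["operation_condition"].round(2))
```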

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices is to achieve an ultra-high aspect ratio contact (UHARC) profile without anomalous behaviors such as sidewall bowing and twisting profiles. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still suffer from formidable challenges, such as tight limits on sidewall bowing and controlling the randomly distorted features in nanoscale etching profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies to overcome these process limitations, including novel plasma chemistries and plasma sources. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for the silicon dioxide etching process under inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe and a Quadrupole Mass Spectrometer (QMS). The surface chemistries of the etched samples were measured by an X-ray Photoelectron Spectrometer. To measure plasma parameters, a self-cleaned RF Langmuir probe was used because of the polymer deposition environment on the probe tip, and the results were double-checked by the cutoff probe, which is known to be a precise plasma diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we proposed a phenomenological and realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. The predicted surface reaction modeling results showed good agreement with the experimental data. With the above studies of plasma surface reaction, we developed a 3D topography simulator using a multi-layer level set algorithm and a new memory-saving technique, which is suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. In the case of ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required to treat this ballistic transport realistically. To address this issue, the related computational codes were efficiently parallelized for GPU (Graphics Processing Unit) computing, so that the total computation time was improved by more than a few hundred times compared with the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model, and realistic etch-profile simulations with consideration of the sidewall polymer passivation layer were demonstrated.

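As a rough illustration of the Monte Carlo ballistic-transport step described in the abstract above, the sketch below samples ion incidence angles and counts how many particles reach the bottom of a high-aspect-ratio hole without striking a sidewall. The angular spread, geometry model, and particle count are illustrative assumptions, not values from the paper.

```python
# Minimal Monte Carlo sketch: fraction of ions reaching the bottom of a contact hole.
import math
import random

def fraction_reaching_bottom(aspect_ratio: float, angular_sigma_deg: float,
                             n_particles: int = 100_000) -> float:
    """Fraction of ions reaching the bottom of a hole with the given depth/diameter ratio."""
    reached = 0
    for _ in range(n_particles):
        theta = random.gauss(0.0, math.radians(angular_sigma_deg))  # off-normal incidence angle
        x = random.uniform(-0.5, 0.5)                               # entry point across the opening
        # lateral drift over the hole depth; a sidewall hit occurs if it leaves the opening width
        drift = aspect_ratio * math.tan(theta)
        if abs(x + drift) <= 0.5:
            reached += 1
    return reached / n_particles

for ar in (5, 20, 50):
    print(ar, fraction_reaching_bottom(ar, angular_sigma_deg=1.0))
```

The rapid drop of the reaching fraction with aspect ratio is one reason such simulations need very many particles, which motivates the GPU parallelization the abstract reports.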

SINUS FLOOR GRAFTING USING CALCIUM PHOSPHATE NANO-CRYSTAL COATED XENOGENIC BONE AND AUTOLOGOUS BONE (칼슘포스페이트 나노-크리스탈이 코팅된 골이식재와 자가골을 병행 이용한 상악동 거상술)

  • Pang, Kang-Mi;Li, Bo-Han;Alrashidan, Mohamed;Yoo, Sang-Bae;Sung, Mi-Ae;Kim, Soung-Min;Jahng, Jeong-Won;Kim, Myung-Jin;Ko, Jea-Seung;Lee, Jong-Ho
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.31 no.3
    • /
    • pp.243-248
    • /
    • 2009
  • Purpose: Rehabilitation of the edentulous posterior maxilla with dental implants often poses difficulty because of insufficient bone volume caused by pneumatization of the maxillary sinus and by crestal bone resorption. The sinus grafting technique was developed to increase the vertical height and overcome this problem. The present study was designed to evaluate sinus floor augmentation with anorganic bovine bone (Bio-cera™) using histomorphometric and clinical measures. Patients and methods: Thirteen patients were involved in this study and underwent a total of 14 sinus lift procedures. Residual bone height was ≥2 mm and ≤6 mm. A lateral window approach was used, with grafting using Bio-cera™ only (n=1) or mixed with autogenous bone from the ramus and/or maxillary tuberosity (n=13). After 6 months of healing, implant sites were created with a 3 mm diameter trephine and biopsies were taken for histomorphometric analysis. The parameters assessed were the area fractions of new bone, graft material, and connective tissue. Immediately after grafting surgery, 6 months after grafting, and 6 months after implantation, computed tomography (CT) was taken and the sinus graft was evaluated by morphometric analysis. After implant installation at the grafted area, the clinical outcome was checked. Results: Histomorphometry was done in ten patients. Bio-cera™ particles were surrounded by newly formed bone. The graft particles and newly formed bone were surrounded by connective tissue including small capillaries in some fields. Image processing revealed 24.86±7.59% new bone, 38.20±13.19% connective tissue, and 36.92±14.51% remaining Bio-cera™ particles. All grafted sites received an implant, and in all cases sufficient bone height was achieved to install implants. The increase in ridge height was about 15.9±1.8 mm immediately after the operation (from 13 mm to 19 mm). Six months after the operation, ridge height had been reduced by about 11.5±13.5%. After implant installation, average marginal bone loss after 6 months was 0.3±0.15 mm. Conclusion: Bio-cera™ showed new bone formation similar to Bio-Oss® histomorphometrically and appeared to be an effective bone substitute for maxillary sinus augmentation procedures with a residual bone height of 2 to 6 mm.
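
The histomorphometric area fractions reported above (new bone, graft material, connective tissue) are essentially pixel-count ratios over a segmented section. A minimal sketch of that computation, using a hypothetical random label image in place of an actual segmented biopsy, could look like this:

```python
# Hypothetical area-fraction computation from a labeled segmentation image.
import numpy as np

# hypothetical segmentation: 0 = connective tissue, 1 = new bone, 2 = graft particle
rng = np.random.default_rng(42)
segmented = rng.integers(0, 3, size=(512, 512))

total_pixels = segmented.size
fractions = {
    "new bone": np.count_nonzero(segmented == 1) / total_pixels,
    "graft material": np.count_nonzero(segmented == 2) / total_pixels,
    "connective tissue": np.count_nonzero(segmented == 0) / total_pixels,
}
for tissue, frac in fractions.items():
    print(f"{tissue}: {100 * frac:.2f}%")
```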

Comparative Study on the Methodology of Motor Vehicle Emission Calculation by Using Real-Time Traffic Volume in the Kangnam-Gu (자동차 대기오염물질 산정 방법론 설정에 관한 비교 연구 (강남구의 실시간 교통량 자료를 이용하여))

  • 박성규;김신도;이영인
    • Journal of Korean Society of Transportation
    • /
    • v.19 no.4
    • /
    • pp.35-47
    • /
    • 2001
  • Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentration of pollutants. A characteristic of most of these strategies is a requirement for accurate data on both the quantity and the spatial distribution of emissions to air, in the form of an atmospheric emission inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for each vehicle type. The majority of inventories are compiled using passive data from either surveys or transportation models and, by their very nature, tend to be out of date by the time they are compiled. Current trends are towards integrating urban traffic control systems with assessments of the environmental effects of motor vehicles. In this study, a methodology for motor vehicle emission calculation using real-time traffic data was studied, and a methodology for estimating CO emissions was applied to a test area in Seoul. Traffic data, which are required on a street-by-street basis, were obtained from the induction loops of the traffic control system. The speed-related mass of CO emitted from vehicle tailpipes was calculated from the traffic-system data, considering parameters such as traffic volume, vehicle composition, average velocity, and link length. The result was then compared with that of an emission calculation method based on the VKT (Vehicle Kilometers Travelled) of each vehicle category.

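The link-based calculation described above multiplies measured traffic volume by a speed-dependent emission factor and the link length. The sketch below shows that structure; the emission factor curve, vehicle categories, and loop data are hypothetical illustrations rather than the coefficients used in the paper.

```python
# Minimal sketch of link-based CO emission calculation from real-time traffic data.
def co_emission_factor(speed_kmh: float) -> float:
    """Hypothetical speed-dependent CO emission factor in g/km (illustrative only)."""
    return 50.0 / max(speed_kmh, 5.0) + 0.5  # higher emissions at low speeds

def link_emission(volume_veh_per_h: float, avg_speed_kmh: float, link_length_km: float,
                  category_shares: dict[str, float]) -> float:
    """CO mass emitted on one link during one hour, in grams."""
    ef = co_emission_factor(avg_speed_kmh)
    total = 0.0
    for category, share in category_shares.items():
        # a real inventory would look up a category-specific factor here
        total += volume_veh_per_h * share * ef * link_length_km
    return total

# example: one link with data from induction loops
shares = {"passenger": 0.7, "bus": 0.1, "truck": 0.2}
print(link_emission(volume_veh_per_h=1800, avg_speed_kmh=25, link_length_km=0.8,
                    category_shares=shares))
```

Summing this quantity over all links and hours gives the inventory that the study compares against a VKT-based calculation.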

Physicochemical Characteristics of Fermented Pig Manure Compost and Cow Manure Compost by Pelletizing (펠렛 가공처리에 따른 돈분 발효퇴비와 우분 발효퇴비의 물리화학적 특성)

  • Jeong, Kwang Hwa;Park, Chi Ho;Choi, Dong Yun;Kwak, Jung Hoon;Yang, Chang Bum;Kang, Ho
    • Journal of the Korea Organic Resources Recycling Association
    • /
    • v.13 no.4
    • /
    • pp.118-127
    • /
    • 2005
  • The best way to treat livestock manure is to recycle the manure to arable land as an organic fertilizer. In this study, fermented cow manure compost and fermented pig manure compost were used as raw materials for pelletizing, and the changes in the physicochemical properties of each compost and its pellets were investigated. The aim of this research was to improve the usability of livestock manure compost. In the pelletizing process of fermented livestock manure compost, the optimal water content for making pellets was around 40%. When more than 15% clay by volume was mixed in as a bonding agent, the pelletizing process began to improve. On a dry matter basis, the N, P and K contents of the fermented pig manure compost were 2.05%, 1.89% and 1.31%, respectively; after pelletizing, the contents of the pelleted pig manure compost were 1.96%, 1.73% and 0.89%, respectively. The same parameters for the cow manure compost were 2.52%, 1.01% and 2.98%, respectively; after processing, the contents of the pelleted cow manure compost were 2.45%, 1.10% and 2.93%, respectively. After pelletizing, there was little change in the content of heavy metals such as Pb, Cd, As and Hg. When pelleted compost dried naturally was submerged in water, it dissolved completely in 30 minutes; on the other hand, pelleted compost dried in a mechanical convection oven set at 70°C for 24 hours dissolved completely in 960 minutes. The volume and weight of the pelleted compost decreased with time: after 30 days of storage, the weight had decreased by 15% compared with its original weight, and the volume had decreased by 17~25% over the same period.


True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.363-373
    • /
    • 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images together with precise 3D object modeling data and a DTM (Digital Terrain Model) to detect and recover occlusion areas; furthermore, it is a challenging task to automate this complicated process. In this paper, we proposed a new concept of true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is one of the DL models used for various tasks in image processing and computer vision. The generator tries to produce results similar to the real images, while the discriminator judges whether images are fake or real, until the results are satisfactory; such a mutually adversarial mechanism improves the quality of the results. Experiments were performed using the GAN-based Pix2Pix model, utilizing IR (Infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on FID (Fréchet Inception Distance) measures. However, if the quality of the input data is close to the target image, better results can be obtained by increasing the number of epochs. This paper is an early experimental study on the feasibility of DL-based true orthoimage generation, and further improvement is necessary.
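
For readers unfamiliar with the adversarial training the abstract describes, the sketch below shows one Pix2Pix-style training step with a conditional discriminator and an L1 reconstruction term. The toy networks, tensor sizes, and loss weight are illustrative assumptions; the paper's actual U-Net/PatchGAN architecture and LiDAR/orthoimage data are not reproduced here.

```python
# Minimal sketch of one conditional-GAN (Pix2Pix-style) training step in PyTorch.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# toy generator / discriminator stand-ins (a real Pix2Pix uses U-Net / PatchGAN)
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Tanh()).to(device)
D = nn.Sequential(nn.Conv2d(1 + 3, 16, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, padding=1)).to(device)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

# hypothetical batch: LiDAR intensity (1 channel) -> orthoimage (3 channels)
intensity = torch.rand(4, 1, 64, 64, device=device)
ortho = torch.rand(4, 3, 64, 64, device=device)

# --- discriminator step: real pairs vs. generated pairs ---
fake = G(intensity).detach()
d_real = D(torch.cat([intensity, ortho], dim=1))
d_fake = D(torch.cat([intensity, fake], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# --- generator step: fool the discriminator + L1 reconstruction term ---
fake = G(intensity)
d_fake = D(torch.cat([intensity, fake], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, ortho)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```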

Quality Changes of a Fully Ripe Korean Native Pumpkin, Yangsan, during Long-term Storage, and High Temperature and Pressure Treatment (장기저장 및 고온고압 처리에 따른 한국재래종 호박 '양산'의 품질변화)

  • Youn, Sun-Joo;Jeong, Byeong-Ryong;Kang, Sun-Chul
    • Applied Biological Chemistry
    • /
    • v.47 no.4
    • /
    • pp.409-413
    • /
    • 2004
  • We studied quality changes of fully ripe fruit of the Korean native pumpkin 'Yangsan' with regard to the following parameters: pH, sugar content, weight, water content, and the contents of crude protein and amino acids during 60 days of storage at room temperature. As a result, there was no change in sugar content over the storage period, but the pH shifted slightly in the acidic direction, with slight decreases in weight and water content. The contents of total crude protein and its constituent amino acids increased during the storage period. The main amino acids of the Korean native pumpkin Yangsan were glutamic acid (15.5%), aspartic acid (10.1%), lysine (8.7%), valine (7.5%), leucine (7.1%) and alanine (6.6%), and their proportions were not greatly influenced by the storage period. Additionally, we investigated the content of free amino acids and the color changes during processing of Yangsan at a high temperature of 121°C and a high pressure of 1 kg/cm². In fully ripe fruits, a total of 29 kinds of free amino acids were detected, including 8 kinds of essential amino acids (histidine, isoleucine, leucine, lysine, phenylalanine, methionine, threonine and valine). More than 35% of the total free amino acids were aspartic acid (20.3%) and asparagine (15.4%); ornithine, citrulline, and arginine, which are related to the ornithine cycle, were also detected in fully ripe fruits. However, when treated at high temperature and high pressure, glutamic acid and arginine decreased rapidly whereas ammonium chloride increased relatively. Moreover, the "b" value, an indicator of yellow color, decreased from 17.45 to 9.14 during 60 minutes of treatment at high temperature and pressure, caused by the degradation of β-carotene and other yellow pigments in Yangsan.

A Study for the Methodology of Analyzing the Operation Behavior of Thermal Energy Grids with Connecting Operation (열 에너지 그리드 연계운전의 운전 거동 특성 분석을 위한 방법론에 관한 연구)

  • Im, Yong Hoon;Lee, Jae Yong;Chung, Mo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.1 no.3
    • /
    • pp.143-150
    • /
    • 2012
  • A simulation methodology, and a program based on it, is discussed for analyzing the effects of networked operation of an existing DHC (district heating and cooling) system in connection with an on-site CHP (combined heat and power) system. Practical simulations for arbitrary areas with various building compositions were carried out to analyze the operational features of both systems, and the various aspects of thermal energy grids with connecting operation are highlighted through a detailed assessment of the predicted results. The intrinsic operational features of CHP prime movers (gas engines, gas turbines, etc.) are effectively implemented by using measured performance data, i.e. the actual operating efficiency over the full- and part-load range. For the sake of simplicity, a simple mathematical correlation model is proposed to simulate the various changes on the existing DHC system side due to the connecting operation, instead of performing separate cycle simulations. The empirical correlations were developed using hourly annual operation data for a branch of the Korea District Heating Corporation (KDHC) and implicitly relate the main operation parameters, such as fuel consumption by use and heat and power production. In the simulation, a variety of system configurations can be considered, using any combination of the probable CHP prime movers and absorption- or turbo-type cooling chillers of every kind and capacity. From the analysis of the thermal network operation simulations, it is found that the newly proposed mathematical correlation methodology for modelling the existing DHC system effectively reflects the operational variations due to thermal energy grids with connecting operation. The effects of the intrinsic features of the CHP prime movers, e.g. different heat-to-power production ratios, and of various combinations of different chiller types (absorption and turbo) on the overall system operation are discussed in detail, together with the operation schemes and the corresponding simulation algorithms.
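
The correlation model described above essentially replaces a separate cycle simulation with a fitted relation between the DHC plant's main operation parameters. A minimal sketch of such a fit, using synthetic hourly data and an assumed linear form (not the paper's actual correlation), is shown below.

```python
# Hypothetical sketch: fit an empirical correlation between hourly heat production,
# power production, and fuel consumption from one year of operation data.
import numpy as np

rng = np.random.default_rng(0)
hours = 8760
heat = rng.uniform(10, 100, hours)            # hypothetical hourly heat production [MWh]
power = rng.uniform(5, 50, hours)             # hypothetical hourly power production [MWh]
fuel = 1.2 * heat + 2.1 * power + 15 + rng.normal(0, 2, hours)  # synthetic "measured" fuel use

# least-squares fit: fuel ≈ a*heat + b*power + c
A = np.column_stack([heat, power, np.ones(hours)])
coeffs, *_ = np.linalg.lstsq(A, fuel, rcond=None)
a, b, c = coeffs
print(f"fuel ≈ {a:.2f}*heat + {b:.2f}*power + {c:.2f}")

# the fitted correlation can then stand in for a separate cycle simulation of the DHC plant
predicted_fuel = A @ coeffs
```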

A Study on the Effect of Using Sentiment Lexicon in Opinion Classification (오피니언 분류의 감성사전 활용효과에 대한 연구)

  • Kim, Seungwoo;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.133-148
    • /
    • 2014
  • Recently, with the advent of various information channels, the volume of available data has continued to grow. The main cause of this phenomenon can be found in the significant increase of unstructured data, as the use of smart devices enables users to create data in the form of text, audio, images, and video. Among the various types of unstructured data, the user's opinion and a variety of information are clearly expressed in text data such as news, reports, papers, and various articles. Thus, active attempts have been made to create new value by analyzing these texts. The representative techniques used in text analysis are text mining and opinion mining. These share certain important characteristics; for example, they not only use text documents as input data, but also use many natural language processing techniques such as filtering and parsing. Therefore, opinion mining is usually recognized as a sub-concept of text mining, or, in many cases, the two terms are used interchangeably in the literature. Suppose that the purpose of a certain classification analysis is to predict a positive or negative opinion contained in some documents. If we focus on the classification process, the analysis can be regarded as a traditional text mining case. However, if we observe that the target of the analysis is a positive or negative opinion, the analysis can be regarded as a typical example of opinion mining. In other words, two methods (i.e., text mining and opinion mining) are available for opinion classification. Thus, in order to distinguish between the two, a precise definition of each method is needed. In this paper, we found that it is very difficult to distinguish between the two methods clearly with respect to the purpose of analysis and the type of results. We conclude that the most definitive criterion to distinguish text mining from opinion mining is whether an analysis utilizes any kind of sentiment lexicon. We first established two prediction models, one based on opinion mining and the other on text mining. Next, we compared the main processes used by the two prediction models. Finally, we compared their prediction accuracy by analyzing 2,000 movie reviews. The results revealed that the prediction model based on opinion mining showed higher average prediction accuracy than the text mining model. Moreover, in the lift chart generated by the opinion mining based model, the prediction accuracy for documents with strong certainty was higher than that for documents with weak certainty. Above all, opinion mining has a meaningful advantage in that it can reduce learning time dramatically, because a sentiment lexicon generated once can be reused in a similar application domain. Additionally, the classification results can be clearly explained by using a sentiment lexicon. This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment was limited to a small number of movie reviews. Additionally, various parameters in the parsing and filtering steps of the text mining may have affected the accuracy of the prediction models. Nevertheless, this research contributes a performance comparison of text mining and opinion mining analyses for opinion classification. In future research, a more precise evaluation of the two methods should be made through intensive experiments.
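
The paper's criterion, whether a sentiment lexicon is used, can be illustrated side by side: a lexicon-based opinion-mining scorer versus a text-mining classifier learned from labeled documents alone. The tiny lexicon, reviews, and labels below are made-up examples, not the 2,000 reviews analyzed in the study.

```python
# Minimal sketch contrasting lexicon-based opinion mining with a learned text-mining classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

reviews = ["great acting and a moving story", "boring plot and terrible pacing",
           "wonderful soundtrack", "awful dialogue"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# --- opinion mining: score each review against a sentiment lexicon ---
lexicon = {"great": 1, "moving": 1, "wonderful": 1, "boring": -1, "terrible": -1, "awful": -1}
def lexicon_predict(text: str) -> int:
    score = sum(lexicon.get(tok, 0) for tok in text.lower().split())
    return 1 if score >= 0 else 0

# --- text mining: learn a classifier from the labeled documents alone ---
vec = CountVectorizer()
X = vec.fit_transform(reviews)
clf = MultinomialNB().fit(X, labels)

print([lexicon_predict(r) for r in reviews])
print(clf.predict(vec.transform(["great pacing", "terrible story"])).tolist())
```

The lexicon scorer needs no training and transfers to a similar domain once the lexicon exists, which corresponds to the learning-time advantage the abstract notes.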

A Dynamic Prefetch Filtering Schemes to Enhance Usefulness Of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk;Lee Byung-Kwon;Lee Chun-Hee;Kim Suk-Il;Jeon Joong-Nam
    • The KIPS Transactions:PartA
    • /
    • v.13A no.2 s.99
    • /
    • pp.123-136
    • /
    • 2006
  • Prefetching is an effective technique to reduce the latency caused by memory access. However, excessively aggressive prefetching not only causes cache pollution that cancels out the benefits of prefetching, but also increases bus traffic, leading to overall performance degradation. In this thesis, a prefetch filtering scheme is proposed that dynamically decides whether to commence prefetching by consulting a filtering table, in order to reduce the cache pollution caused by unnecessary prefetches. First, a prefetch hashing table 1-bit SC filtering scheme (PHT1bSC) is analyzed to expose the lock problem of the conventional scheme; like the conventional scheme it uses N:1 mapping, but each entry holds a 1-bit value with two states. Second, a complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. Third, a prefetch block address lookup table scheme (PBALT), the main idea of this paper, is proposed and exhibits the most exact filtering performance. This scheme has the same table length as the PHT1bSC scheme and the same entry fields as the CBAT scheme, and a recently never-referenced data block address is mapped 1:1 onto an entry of the filter table. Simulations were performed on commonly used prefetch schemes with general benchmarks and multimedia programs while varying the cache parameters. Compared with no filtering, the PBALT scheme showed an improvement of up to 22%, and the cache miss ratio decreased by 7.9% thanks to the enhanced filtering accuracy compared with the conventional PHT2bSC. The MADT of the proposed PBALT scheme decreased by 6.1% compared with conventional schemes, reducing the total execution time.
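
A block-address filter table of the kind compared above can be sketched as a small lookup structure consulted before each prefetch is issued. The table size, replacement policy, and decision rule below are illustrative assumptions, not the exact PBALT design.

```python
# Minimal sketch of a prefetch filter table: prefetched block addresses that were evicted
# without ever being referenced are recorded, and future prefetches to them are suppressed.
from collections import OrderedDict

class PrefetchFilter:
    def __init__(self, entries: int = 256):
        self.table = OrderedDict()   # block address -> True (known-useless prefetch)
        self.entries = entries

    def should_prefetch(self, block_addr: int) -> bool:
        """Filter out prefetches to blocks previously fetched but never used."""
        return block_addr not in self.table

    def record_useless_prefetch(self, block_addr: int) -> None:
        """Called when a prefetched block is evicted without being referenced."""
        self.table[block_addr] = True
        self.table.move_to_end(block_addr)
        if len(self.table) > self.entries:
            self.table.popitem(last=False)   # evict the oldest entry (FIFO)

    def record_demand_hit(self, block_addr: int) -> None:
        """A demand reference proves the block useful again; allow future prefetches."""
        self.table.pop(block_addr, None)

# usage: the prefetcher consults the filter before issuing each prefetch
filt = PrefetchFilter()
filt.record_useless_prefetch(0x1A2B)
print(filt.should_prefetch(0x1A2B))  # False: suppressed
print(filt.should_prefetch(0x3C4D))  # True: allowed
```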