• Title/Summary/Keyword: 교정시스템 (correction system)


Changes in Inorganic Element Concentrations of Drained Nutrient Solution and Leaves in Compliance with Numerical Increment of Fruiting Node during Hydroponic Cultivation of Cherry Tomato (방울토마토 수경재배 시 착과 절위 증가에 따른 공급액, 배액 및 식물체의 무기성분 농도 변화)

  • Lee, Eun Mo;Park, Sang Kyu;Kim, Gyoung Je;Lee, Bong Chun;Lee, Hee Chul;Yun, Yeo Uk;Park, Soo Bok;Choi, Jong Myoung
    • Journal of Bio-Environment Control / v.26 no.4 / pp.361-367 / 2017
  • Production cost as well as environmental contamination can be reduced by reusing drained nutrient solution in hydroponics. This research was conducted to obtain information on changes in the inorganic element concentrations of the supplied and drained nutrient solutions as well as of plant leaves. To achieve this objective, samples of the supplied and drained solutions and of cherry tomato leaf tissue were collected and analyzed periodically during hydroponic cultivation. The electrical conductivity (EC) of the supplied and drained nutrient solutions in the early growth stage of cherry tomato was around 2.0 dS·m⁻¹, but those values rose with the passage of time, reaching 2.0 dS·m⁻¹ at the flowering stage of the 9th fruiting node. The pH of the drained solution in the early growth stage was 6.4 to 6.7, but tended to drop to 5.9 to 6.1 as crop cultivation progressed. The concentration differences of NO₃-N, P, K, Ca, and Mg between the supplied and drained solutions were not distinctive until the flowering stage of the 4th fruiting node, while the concentrations in the drained solution rose after that stage. The tissue N contents of leaves decreased gradually, and those of K and Ca increased, as the crop grew. However, tissue P and Mg contents were maintained at similar levels from transplanting to the end of the crop. The above results can be used to correct the drained nutrient solution when its element composition diverges from that of the supplied solution in hydroponic cultivation of tomatoes.

Radiation Dose during Transmission Measurement in Whole Body PET/CT Scan (전신 PET/CT 영상 획득 시 투과 스캔에서의 방사선 선량)

  • Son Hye-Kyung;Lee Sang-Hoon;Nam So-Ra;Kim Hee-Joung
    • Progress in Medical Physics / v.17 no.2 / pp.89-95 / 2006
  • The purpose of this study was to evaluate radiation doses during CT transmission scans as tube voltage and tube current were varied, and to estimate the radiation dose during our clinical whole-body ¹³⁷Cs transmission scan and high-quality CT scan. Radiation doses were evaluated for a Philips GEMINI 16-slice PET/CT system. Radiation dose was measured with standard CTDI head and body phantoms over a range of CT tube voltages and tube currents. A pencil ionization chamber with an active length of 100 mm and an electrometer were used for the dose measurements, which were carried out free-in-air, at the center, and at the periphery. The averaged absorbed dose was calculated as the weighted CTDI, CTDI_w = (1/3)CTDI_100,c + (2/3)CTDI_100,p, and the equivalent dose was then calculated from CTDI_w. Specific organ doses were measured for our clinical whole-body ¹³⁷Cs transmission scan and high-quality CT scan using an Alderson phantom and TLDs. The TLDs used for the measurements were selected for an accuracy of ±5% and calibrated in a 10 MeV X-ray radiation field. The organs and tissues were selected following the recommendations of ICRP 60. The radiation dose during a CT scan is affected by the tube voltage and the tube current. The effective doses for the ¹³⁷Cs transmission scan and the high-quality CT scan were 0.14 mSv and 29.49 mSv, respectively. The radiation dose during a transmission scan in a PET/CT system can be measured using a CTDI phantom with an ionization chamber and an anthropomorphic phantom with TLDs. Further studies need to be performed to find optimal PET/CT acquisition protocols that reduce patient exposure while maintaining the same image quality.
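The weighted-CTDI formula quoted in the abstract can be sketched in a few lines of Python. This is only an illustration of the arithmetic; the chamber readings below are hypothetical, not measurements from the study.

```python
def ctdi_w(ctdi_center, ctdi_periphery):
    """Weighted CTDI (mGy): CTDI_w = (1/3)*CTDI_100,c + (2/3)*CTDI_100,p."""
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

# Hypothetical 100 mm pencil-chamber readings for a body phantom (mGy)
print(ctdi_w(30.0, 15.0))  # 20.0
```

The 1/3 : 2/3 weighting reflects the relative areas represented by the central and peripheral measurement positions in the phantom cross-section.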


A Study on Accuracy and Usefulness of In-vivo Dosimetry in Proton Therapy (양성자 치료에서 생체 내 선량측정 검출기(In-vivo dosimetry)의 정확성과 유용성에 관한 연구)

  • Kim, Sunyoung;Choi, Jaehyock;Won, Huisu;Hong, Joowan;Cho, Jaehwan;Lee, Sunyeob;Park, Cheolsoo
    • Journal of the Korean Society of Radiology / v.8 no.4 / pp.171-180 / 2014
  • In this study, the authors measured skin dose by delivering the actual treatment dose to TLDs (thermoluminescence dosimeters) and EBT3 film used as in-vivo dosimeters, after setting up on a phantom the same treatment plan as for an actual patient, because erythema or dermatitis frequently occurs on the skin of medulloblastoma patients receiving proton therapy. The aim was to determine whether these detectors are useful for skin dosimetry by comparing the measured dose values with the planned skin dose. A CT scan from the brain to the pelvis was acquired with the phantom placed in the CSI (craniospinal irradiation) set-up position used for medulloblastoma, and after a proton therapy plan was created, the treatment isocenter was aligned using DIPS (Digital Image Positioning System) in the treatment room. Pre-calibrated TLDs and EBT3 film were attached alternately at 7 points in total: the 5 treatment isocenter points where the proton beam entered, and 2 marker positions visible on the phantom during the CT scan. The planned proton beam was then delivered 10 times. Comparing the averages of the 10 repeated measurements with the skin doses calculated by the treatment planning system, the measured dose values at 6 points were within ±2% of the absolute dose for both TLD and EBT3 film, the exception being one point where accurate measurement was hampered by a difficult measurement position. In conclusion, this study confirmed the clinical usefulness of TLD and EBT3 film for entrance skin dose measurement in the first proton therapy in Korea.

Introduction of GOCI-II Atmospheric Correction Algorithm and Its Initial Validations (GOCI-II 대기보정 알고리즘의 소개 및 초기단계 검증 결과)

  • Ahn, Jae-Hyun;Kim, Kwang-Seok;Lee, Eun-Kyung;Bae, Su-Jung;Lee, Kyeong-Sang;Moon, Jeong-Eon;Han, Tai-Hyun;Park, Young-Je
    • Korean Journal of Remote Sensing / v.37 no.5_2 / pp.1259-1268 / 2021
  • The 2nd Geostationary Ocean Color Imager (GOCI-II) is the successor to the Geostationary Ocean Color Imager (GOCI); it employs one near-ultraviolet wavelength (380 nm), eight visible wavelengths (412, 443, 490, 510, 555, 620, 660, 680 nm), and three near-infrared wavelengths (709, 745, 865 nm) to observe the marine environment in Northeast Asia, including the Korean Peninsula. However, the multispectral radiance image observed at satellite altitude includes both the water-leaving radiance and the atmospheric path radiance. Therefore, an atmospheric correction process that estimates the water-leaving radiance without the path radiance is essential for analyzing the ocean environment. This manuscript describes the GOCI-II standard atmospheric correction algorithm and its initial-phase validation. The GOCI-II atmospheric correction method is theoretically based on the previous GOCI atmospheric correction, partially improved for turbid water using GOCI-II's two additional bands, i.e., 620 and 709 nm. The match-up analysis showed an acceptable result, with mean absolute percentage errors falling within 5% in the blue bands. Part of the deviation over case-II waters presumably arose from a lack of near-infrared vicarious calibration. We expect the GOCI-II atmospheric correction algorithm to be improved and updated regularly in the GOCI-II data processing system through continuous calibration and validation activities.
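The match-up metric named above, mean absolute percentage error, is a simple statistic; a minimal sketch follows. The reflectance values in the example are hypothetical and are not taken from the GOCI-II validation data.

```python
def mape(estimates, references):
    """Mean absolute percentage error (%) between retrieved and reference values."""
    assert len(estimates) == len(references) and len(references) > 0
    return 100.0 * sum(abs(e - r) / abs(r)
                       for e, r in zip(estimates, references)) / len(estimates)

# Hypothetical blue-band match-ups: satellite retrievals vs. in-situ values
print(mape([0.0095, 0.0102], [0.0100, 0.0100]))
```

A MAPE under 5%, as reported for the blue bands, means the retrievals deviate from the in-situ references by less than 5% on average.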

Active Inferential Processing During Comprehension in Poor Readers (미숙 독자들에 있어 이해 도중의 능동적 추리의 처리)

  • Zoh Myeong-Han;Ahn Jeung-Chan
    • Korean Journal of Cognitive Science / v.17 no.2 / pp.75-102 / 2006
  • Three experiments were conducted using a verification task to examine good and poor readers' generation of causal inferences (with because sentences) and contrastive inferences (with although sentences). The unfamiliar, critical verification statement was either explicitly mentioned or implied. In Experiment 1, both good and poor readers responded accurately to the critical statement, suggesting that both groups had the linguistic knowledge necessary to make the required inferences. Differences were found, however, in the groups' verification latencies. Poor, but not good, readers responded faster to explicit than to implicit verification statements for both because and although sentences. In Experiment 2, poor readers were induced to generate causal inferences for the because experimental sentences by including fillers that appeared counterfactual unless a causal inference was made. In Experiment 3, poor readers were induced to generate contrastive inferences for the although sentences by including fillers that could only be resolved by making a contrastive inference. Verification latencies for the critical statements showed that poor readers made causal inferences in Experiment 2 and contrastive inferences in Experiment 3 during comprehension. These results are discussed in terms of a context effect: specific encoding operations performed on an anomaly backgrounded in another passage would form part of the context that guides ongoing activity in processing potentially relevant subsequent text.


Evaluation of the Dose Calculation Algorithm (AAA)'s Accuracy for Radiation Therapy of Inhomogeneous Tissues Using an FFF Beam (FFF빔을 사용한 불균질부 방사선치료 시 선량계산 알고리즘(AAA)의 정확성 평가)

  • Kim, In Woo;Chae, Seung Hoon;Kim, Min Jung;Kim, Bo Gyoum;Kim, Chan Yong;Park, So Yeon;Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy / v.26 no.2 / pp.321-327 / 2014
  • Purpose: To verify the accuracy of Eclipse's dose calculation algorithm (AAA: analytic anisotropic algorithm) for radiation treatment of inhomogeneous tissues using an FFF beam, by comparing the dose distribution in the TPS with the actual distribution. Materials and Methods: After acquiring CT images, by tumor location and size, of a chest tumor phantom made of solid water phantoms, cork, and paraffin, we established a 6 MV photon treatment plan for chest SABR in our radiation treatment planning system using Eclipse's AAA. Following the completed plan, using our TrueBeam STx (Varian Medical Systems, Palo Alto, CA), we irradiated the chest tumor phantom with EBT2 films inserted and compared the dose values of the treatment plan with those measured in the phantom over the inhomogeneous tissue. Results: The difference in dose values between the TPS and measurement at the medial target was 1.28~2.7%; at the sides of the target including inhomogeneous tissues, the differences were 2.02%~7.40% at Ant, 4.46%~14.84% at Post, 0.98%~7.12% at Rt, 1.36%~4.08% at Lt, 2.38%~4.98% at Sup, and 0.94%~3.54% at Inf. Conclusion: In this study, we found that dose calculation errors can arise from the characteristics of FFF beams and of inhomogeneous tissues when performing SBRT on inhomogeneous tissues. SBRT, now a very popular therapy method, requires high accuracy because it delivers a high radiation dose in a small number of fractions. Treatment can therefore be improved if these errors are minimized at the planning stage through further study of the characteristics of inhomogeneous tissues and FFF beams.

Evaluation of Radiation Exposure to Medical Staff except Nuclear Medicine Department (핵의학 검사 시행하는 환자에 의한 병원 종사자 피폭선량 평가)

  • Lim, Jung Jin;Kim, Ha Kyoon;Kim, Jong Pil;Jo, Sung Wook;Kim, Jin Eui
    • The Korean Journal of Nuclear Medicine Technology / v.20 no.2 / pp.32-35 / 2016
  • Purpose: The goal of this study was to determine how much radiation medical staff outside the Nuclear Medicine Department could be exposed to from patients who undergo nuclear medicine examinations. Materials and Methods: A total of 250 patients (bone scan 100, myocardial SPECT 100, PET/CT 50) were enrolled from July to October 2015, and the patient dose rate was measured twice for every patient: first right after injection of the radiopharmaceutical (isotope), and again after each examination. Results: For bone scans, dose rates were 0.0278 ± 0.0036 mSv/h after injection and 0.0060 ± 0.0018 mSv/h after examination (3 hrs 52 min after injection on average). For myocardial SPECT, dose rates were 0.0245 ± 0.0027 mSv/h after injection and 0.0123 ± 0.0041 mSv/h after examination (2 hrs 9 min after injection on average). Lastly, for PET/CT, the dose rate was 0.0439 ± 0.0087 mSv/h after examination (68 minutes after injection on average). Conclusion: Compared with the limits of the Nuclear Safety Commission Act, there was no significant harmful effect from exposure to patients who had been administered radiopharmaceuticals. However, we should strive to keep to the ALARA (as low as reasonably achievable) principle for radiation protection.


Effect of Attenuation Correction, Scatter Correction and Resolution Recovery on Diagnostic Performance of Quantitative Myocardial SPECT for Coronary Artery Disease (감쇠보정, 산란보정 및 해상도복원이 정량적 심근 SPECT의 관상동맥질환 진단성능에 미치는 효과)

  • Hwang, Kyung-Hoon;Lee, Dong-Soo;Paeng, Jin-Chul;Lee, Myoung-Mook;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.36 no.5 / pp.288-297 / 2002
  • Purpose: Soft tissue attenuation and scattering are major methodological limitations of myocardial perfusion SPECT. To overcome these limitations, algorithms for attenuation and scatter correction and resolution recovery (ASCRR) have been developed, while quantitative myocardial SPECT has also become available. In this study, we investigated the efficacy of an ASCRR-corrected quantitative myocardial SPECT method for the diagnosis of coronary artery disease (CAD). Materials and Methods: Seventy-five patients (M:F = 51:24, 61.0 ± 8.9 years old) suspected of CAD who underwent coronary angiography (CAG) within 7 ± 12 days of SPECT (Group I) and 20 subjects (M:F = 10:10, age 40.6 ± 9.4) with a low likelihood of coronary artery disease (Group II) were enrolled. Tl-201 rest / dipyridamole-stress Tc-99m-MIBI gated myocardial SPECT was performed. ASCRR correction was performed using a Gd-153 line source and automatic software (Vantage-Pro; ADAC Labs, USA). Using a 20-segment model, segmental perfusion was automatically quantified on both the ASCRR-corrected and uncorrected images with automatic quantification software (AutoQUANT; ADAC Labs). Using these quantified values, CAD was diagnosed in each of the 3 coronary arterial territories, and the diagnostic performance of ASCRR-corrected SPECT was compared with that of non-corrected SPECT. Results: Among the 75 patients of Group I, 9 had normal CAG while the remaining 66 had 155 arterial lesions: 61 left anterior descending (LAD), 48 left circumflex (LCX), and 46 right coronary artery (RCA) lesions. For the LAD and LCX lesions, there was no significant difference in diagnostic performance. In Group II, the overall normalcy rate improved, but the improvement was not statistically significant (p=0.07). For RCA lesions, however, specificity improved significantly while sensitivity worsened significantly with ASCRR correction (both p<0.05); overall accuracy was unchanged. Conclusion: ASCRR correction did not improve diagnostic performance significantly, although the diagnostic specificity for RCA lesions improved on quantitative myocardial SPECT. Clinical application of ASCRR correction requires more discretion regarding cost and efficacy.

Report about First Repeated Sectional Measurements of Water Property in the East Sea using Underwater Glider (수중글라이더를 활용한 동해 최초 연속 물성 단면 관측 보고)

  • Lim, Gyuchang;Park, Jongjin
    • The Sea: Journal of the Korean Society of Oceanography / v.29 no.1 / pp.56-76 / 2024
  • We made the first successful long continuous sectional observation in the East Sea with an underwater glider, operated for 95 days from September 18 to December 21, 2020 along Line 106 (129.1°E ~ 131.5°E at 37.9°N) of the regular shipboard measurements of the National Institute of Fisheries Science (NIFS), and obtained twelve hydrographic sections with high spatiotemporal resolution. The glider was deployed at 129.1°E on September 18 and conducted an 88-day flight from September 19 to December 15, 2020, yielding the twelve hydrographic sections; it was then recovered at 129.2°E on December 21 after a final 6-day virtual mooring operation. Over the total traveled distance of 2,550 km, the estimated deviation from the predetermined zonal path had an RMS distance of 262 m. Based on these high-resolution, long-term glider measurements, we conducted a comparative study against the bi-monthly NIFS measurements in terms of spatial and temporal resolution and found distinct features. One is that sub-mesoscale spatial features, such as sub-mesoscale frontal structure and an intensified thermocline, were detected only in the glider measurements, owing mainly to the glider's high spatial resolution. Another is the detection of intra-monthly variations in the weekly time series of temperature and salinity extracted from the glider's continuous sections. Lastly, there were deviations and biases between the measurements from the two platforms, which we discuss in terms of the time scale of variation, the spatial scale of fixed-point observation, and the calibration status of the CTD devices on both platforms.
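The RMS cross-track statistic reported above can be computed as sketched below. The per-dive offsets in the example are hypothetical, not the study's navigation data; the study's actual processing pipeline is not described in the abstract.

```python
import math

def rms(deviations):
    """Root-mean-square of cross-track deviations (metres) from a planned track."""
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

# Hypothetical per-dive offsets from the 37.9°N zonal line, in metres
print(rms([100.0, -300.0, 250.0, -150.0]))
```

Because the deviations are squared, occasional large excursions dominate the RMS, which is why it is a common summary of how tightly a glider held its planned line.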

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the model. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, cameras, etc. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. This raises several questions. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model?
Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying various deep learning models to Korean texts. As a starting point, we summarize these issues in three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To approach these research questions, we generate various types of morpheme vectors reflecting them and then compare the classification accuracy through a non-static CNN (convolutional neural network) model taking the morpheme vectors as input. For the training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria.
First, they come from two types of data source: Naver News, of high grammatical correctness, and Naver Shopping's cosmetics product reviews, of low grammatical correctness. Second, they are distinguished by the degree of data preprocessing: only splitting sentences, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: whether the morphemes themselves are entered, or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (continuous bag-of-words) model with a context window of 5 and a vector dimension of 300. Utilizing text from the same domain even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags including the incomprehensible category lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum-frequency standard for a morpheme to be included seem not to have any definite influence on classification accuracy.
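The two input forms contrasted above (bare morphemes vs. morphemes with POS tags attached) can be illustrated with a toy token builder. The morpheme/tag pair is the '예쁘고' example from the abstract; the tag names VA and EC and the helper function are illustrative assumptions, not output of an actual Korean morphological analyzer.

```python
def to_tokens(morpheme_pos_pairs, attach_pos=True):
    """Build word-vector-model input tokens from (morpheme, POS tag) pairs.

    attach_pos=True  -> '예쁘/VA'-style tokens, which separate homonyms
    attach_pos=False -> bare morphemes such as '예쁘'
    """
    if attach_pos:
        return [f"{m}/{t}" for m, t in morpheme_pos_pairs]
    return [m for m, _ in morpheme_pos_pairs]

# '예쁘고' analyzed as adjective stem + connective ending, per the abstract
pairs = [("예쁘", "VA"), ("고", "EC")]
print(to_tokens(pairs))         # ['예쁘/VA', '고/EC']
print(to_tokens(pairs, False))  # ['예쁘', '고']
```

With POS tags attached, two homonymous morphemes with different tags become distinct vocabulary entries and thus receive separate vectors, which is exactly the trade-off the study evaluates.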