• Title/Summary/Keyword: 시간복잡도 (time complexity)


Dosimetric Effect of Selectable Optimization Parameters on Volumetric Modulated Arc Therapy (선택적 최적화 변수(Selectable Optimization Parameters)에 따른 부피적조절회전방사선치료(VMAT)의 선량학적 영향)

  • Jung, Jae-Yong;Shin, Yong-Joo;Sohn, Seung-Chang;Kim, Yeon-Rae;Min, Jung-Wan;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.23 no.1
    • /
    • pp.15-25
    • /
    • 2012
  • The aim of this study was to evaluate plan quality and dose accuracy of Volumetric Modulated Arc Therapy (VMAT) on the TG-119 test cases and to investigate the effects of varying VMAT's selectable optimization parameters. VMAT treatment planning was implemented on a Varian iX linear accelerator with the ARIA record-and-verify system (Varian Medical Systems, Palo Alto, CA) and the Oncentra MasterPlan treatment planning system (Nucletron BV, Veenendaal, Netherlands). Plan quality and dosimetric accuracy were evaluated for the effects of varying the number of arcs, the gantry spacing, and the delivery time on the test geometries provided in TG-119. Plan quality for the target and OARs was evaluated by the mean value and standard deviation of the dose-volume histograms (DVHs). An ionization chamber and the $\Delta^{4PT}$ bi-planar diode array were used for dose evaluation. In the treatment planning evaluation, with a single arc all structure sets came close to the goals except the C-shape (hard), and with dual arcs all structure sets achieved the goals except the C-shape (hard). Regarding the number of arcs, a simple structure such as the prostate showed no difference between single and dual arcs, whereas a complex structure such as the head and neck showed superior results with dual arcs. The dose distribution with a gantry spacing of $4^{\circ}$ showed better plan quality than a spacing of $6^{\circ}$, and results similar to a spacing of $2^{\circ}$. In the verification of dose accuracy with single and dual arcs, the mean relative errors between measured and calculated values were within 3% for point dose and 4% for the confidence limit, respectively. In the verification of dose accuracy with gantry spacings of $2^{\circ}$, $4^{\circ}$ and $6^{\circ}$, the mean relative errors were within 3% for point dose and 5% for the confidence limit.
In the verification of the dose distribution with the $\Delta^{4PT}$ bi-planar diode array, the gamma passing rate was $98.72{\pm}1.52%$ and $98.3{\pm}1.5%$ for single and dual arcs, respectively, and the confidence limits were within 4%. The smaller the gantry spacing, the more accurate the results. In this study, we performed VMAT QA based on the TG-119 procedure and demonstrated that all structure sets satisfied the acceptance criteria. The results for the selectable optimization parameters also demonstrated the importance of choosing suitable parameters for each clinical case.
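The "confidence limit" figures quoted above follow the TG-119 convention, which folds the systematic offset and the spread of the measured-versus-calculated deviations into a single number (the abstract itself does not restate the formula):

$CL = |\bar{d}| + 1.96\,\sigma_d$

where $\bar{d}$ is the mean deviation between measurement and calculation and $\sigma_d$ is its standard deviation; a plan passes when $CL$ falls under the stated tolerance (e.g., 3-5% here).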

The Effect of Dexamethasone on Airway Goblet Cell Hyperplasia and Inflammation in $TiO_2$-Treated Sprague-Dawley Rats ($TiO_2$로 처치된 백서에서 기도내 배상세포 증식과 염증에 대한 Dexamethasone의 효과)

  • Lim, Gune-Il;Kim, Do-Jin;Park, Choon-Sik
    • Tuberculosis and Respiratory Diseases
    • /
    • v.49 no.1
    • /
    • pp.37-48
    • /
    • 2000
  • Background: The pathophysiology of chronic airflow obstruction, as in bronchial asthma, is characterized by mucus hypersecretion, goblet cell hyperplasia (GCH), smooth muscle hypertrophy, and inflammatory cell infiltration. In fatal asthma, one distinct finding is mucus hypersecretion due to GCH. However, the mechanisms of GCH in these hypersecretory diseases remain unknown. In this study, GCH was rapidly induced in a rat model by intratracheal instillation of $TiO_2$. We intended to confirm GCH and its association with concomitant inflammatory cell infiltration, and to observe the effect of a potent anti-inflammatory agent, dexamethasone, on GCH and inflammatory cells. Methods: Twenty-one 8-week-old male Sprague-Dawley rats were divided into three groups. Endotoxin-free water was instilled intratracheally in group 1 (control); $TiO_2$ was instilled in group 2; and dexamethasone was injected intraperitoneally in group 3 before $TiO_2$ instillation. After 120 hours, all rats were sacrificed, and the trachea, bronchi, and lungs were resected. The tissues were embedded in paraffin blocks and stained with PAS for goblet cells and Luna stain for eosinophils. We calculated the ratio of goblet cells to respiratory epithelium and the number of infiltrated eosinophils in each tissue. Results: (1) The fraction of goblet cells was significantly higher in group 2 than in group 1 in the trachea and in the main bronchus (10.19$\pm$11.33% vs 4.09$\pm$8.28%, p<0.01, and 34.09$\pm$23.91% vs 3.61$\pm$4.84%, p<0.01, respectively). (2) Eosinophils were significantly increased in the airways of group 2 compared with group 1 (5.43$\pm$3.84 vs 0.17$\pm$0.47 in the trachea and 47.71$\pm$16.91 vs 2.71$\pm$1.96 in the main bronchi). (3) There was a positive correlation between goblet cells and eosinophils (r=0.719, p=0.001). (4) Goblet cells were significantly decreased after dexamethasone injection in group 3 compared with group 2 (p<0.01).
Infiltration of eosinophils was also suppressed by dexamethasone. Conclusion: We established an animal model of $TiO_2$-induced goblet cell hyperplasia. GCH was observed mainly in the main bronchi with concomitant eosinophilic infiltration. Both goblet cell hyperplasia and eosinophilic infiltration were suppressed by dexamethasone. This animal model may serve as a useful tool for understanding the mechanism of GCH in chronic airway diseases.


A Clinical Study of Corrosive Esophagitis (식도부식증에 대한 임상적 고찰)

  • 조진규;차창일;조중생;최춘기
    • Proceedings of the KOR-BRONCHOESO Conference
    • /
    • 1981.05a
    • /
    • pp.7-8
    • /
    • 1981
  • The authors clinically observed 34 cases of corrosive esophagitis caused by various corrosive agents at Kyung Hee University Hospital from Aug. 1978 to Dec. 1980. The results were as follows: 1. Of the 34 patients, 19 (55.9%) were male and 15 (44.1%) female; the most frequent age group was the third decade. 2. Eighteen cases (52.9%) came to the hospital within 24 hours after ingestion of the agent, and 13 cases (38.2%) within 2 to 7 days. 3. The seasonal distribution peaked in spring (35.3%). 4. The circumstance of the accident was a suicide attempt in 27 cases (79.4%) and accidental ingestion in 7 cases (20.6%). 5. Acetic acid was the most commonly used agent, accounting for 23 cases (67.6%); lye and insecticides were next in order. 6. The common chief complaints were swallowing difficulty and sore throat. 7. The average hospital stay was 14.8 days. 8. An esophagogram was performed between 3 and 7 days after ingestion in 13 cases (38.2%); the findings were constriction at the first narrowing portion in 4 cases (30.8%) and within normal limits in 3 cases (23.1%). 9. Esophagoscopy was performed in 31 cases (91.2%) between 2 and 7 days after ingestion, revealing edema and coating at the entrance of the esophagus in 9 cases (29.0%); diffuse edema along the entire length of the esophagus and within-normal-limits findings were next in order. 10. Laboratory results were as follows: anemia in 1 case (2.9%), leukocytosis in 21 cases (61.8%), increased ESR in 9 cases (26.5%), markedly increased BUN and creatinine in 3 cases (8.8%), and hypokalemia in 1 case (2.9%); proteinuria in 10 cases (29.4%), hematuria in 4 cases (11.8%), and cola-colored urine in 3 cases (8.8%). 11. Associated diseases were cancer in 3 cases (8.8%), diabetes mellitus in 1 case (2.9%), and manic-depressive illness in 1 case (2.9%). 12.
Various treatments were given: esophageal and gastric washing in 23 cases (67.6%) as emergency treatment, antibiotics in 32 cases (94.1%), steroids in 30 cases (88.2%), bougienation in 5 cases (14.7%), hemodialysis in 1 case (2.9%), and partial esophagectomy with gastrostomy and gastroileal anastomosis in 1 case (2.9%). 13. Serious complications were observed in 9 cases (26.5%), consisting of esophageal stricture in 6 cases (17.6%), acute renal failure in 1 case (2.9%), pneumomediastinum with pneumonia in 1 case (2.9%), and pneumonia in 1 case (2.9%).


Usefulness of Troponin-I, Lactate, and C-reactive Protein as Prognostic Markers in Critically Ill Non-cardiac Patients (비 순환기계 중환자의 예후 인자로서의 Troponin-I, Lactate, C-reactive protein의 유용성)

  • Cho, Yu Ji;Ham, Hyeon Seok;Kim, Hwi Jong;Kim, Ho Cheol;Lee, Jong Deok;Hwang, Young Sil
    • Tuberculosis and Respiratory Diseases
    • /
    • v.58 no.6
    • /
    • pp.562-569
    • /
    • 2005
  • Background: Severity scoring systems are useful for predicting the outcome of critically ill patients, but they are complicated and cost-ineffective. Simple serologic markers have been proposed to predict outcome, including troponin-I, lactate, and C-reactive protein (CRP). The aim of this study was to evaluate the prognostic value of troponin-I, lactate, and CRP in critically ill non-cardiac patients. Methods: From September 2003 to June 2004, 139 patients (age $63.3{\pm}14.7$; M:F = 88:51) admitted to the MICU with non-cardiac critical illness at Gyeongsang National University Hospital were enrolled. We evaluated the severity-of-illness and multi-organ-failure scores (Acute Physiology and Chronic Health Evaluation II, Simplified Acute Physiology Score II, and Sequential Organ Failure Assessment) and measured troponin-I, lactate, and CRP within 24 hours after MICU admission. Each value was compared between survivors and non-survivors on the 10th and 30th day after ICU admission, and mortality was compared between the normal and abnormal groups at the same time points. The correlations between each value and the severity scores were also assessed. Results: Troponin-I and CRP levels, but not lactate, were significantly higher in non-survivors than in survivors on the 10th day ($1.018{\pm}2.58ng/ml$, $98.48{\pm}69.24mg/L$ vs. $4.208{\pm}10.23ng/ml$, $137.69{\pm}70.18mg/L$) (p<0.05). Troponin-I, lactate, and CRP levels were all significantly higher in non-survivors on the 30th day ($0.99{\pm}2.66ng/ml$, $8.02{\pm}9.54ng/dl$, $96.87{\pm}68.83mg/L$ vs. $3.36{\pm}8.74ng/ml$, $15.42{\pm}20.57ng/dl$, $131.28{\pm}71.23mg/L$) (p<0.05). The mortality rate was significantly higher in the abnormal troponin-I, lactate, and CRP groups than in the corresponding normal groups on the 10th day (28.1%, 31.6%, 18.9% vs.
11.0%, 15.8%, 0%) and the 30th day (38.6%, 47.4%, 25.8% vs. 15.9%, 21.7%, 14.3%) (p<0.05). Troponin-I and lactate were significantly correlated with the SAPS II score ($r^2=0.254$, 0.365, p<0.05). Conclusion: Measuring troponin-I, lactate, and CRP levels on admission may be useful for predicting the outcome of critically ill non-cardiac patients.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data comprised 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the learning target, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data-imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. This makes it possible to provide stable default risk assessment services to companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized enterprises and startups. Although the prediction of corporate default risk using machine learning has been actively studied recently, most studies make predictions based on a single model, so model bias remains an issue. Given that default risk information is used very widely in the market and sensitivity to differences in default risk is high, a stable and reliable valuation methodology with strict calculation standards is required.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that synthesize various machine learning models. This captures the complex nonlinear relationships between default risk and various corporate information while preserving the main advantage of machine learning-based default risk prediction, namely short calculation time. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce their forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences, pairs of forecasts were constructed between the stacking ensemble model and each individual model. Because Shapiro-Wilk tests showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based bankruptcy risk prediction, since traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The stacking ensemble techniques proposed here can also help in designing models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical adoption by overcoming the limitations of existing machine learning-based models.
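The stacking scheme the abstract describes (split the training data into seven pieces, train each sub-model on the remaining folds, predict the held-out fold, then fit a meta-learner on those predictions) can be sketched as follows. This is a minimal illustration with synthetic two-class data and two toy base learners standing in for the paper's Random Forest/MLP/CNN sub-models and financial-ratio features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for financial-ratio features: two Gaussian
# clusters, label 1 = "high default risk".
n = 700
X = np.vstack([rng.normal(-1.0, 1.0, (350, 4)), rng.normal(+1.0, 1.0, (350, 4))])
y = np.array([0] * 350 + [1] * 350)
perm = rng.permutation(n)
X, y = X[perm], y[perm]
X_train, y_train, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

def fit_centroid(Xtr, ytr):
    """Toy base learner 1: nearest-centroid classifier."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return lambda Xq: (np.linalg.norm(Xq - c1, axis=1)
                       < np.linalg.norm(Xq - c0, axis=1)).astype(float)

def fit_logreg(Xtr, ytr, lr=0.1, steps=300):
    """Toy base learner 2: logistic regression by gradient descent."""
    w, b = np.zeros(Xtr.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xtr @ w + b)))
        g = p - ytr
        w -= lr * Xtr.T @ g / len(ytr)
        b -= lr * g.mean()
    return lambda Xq: 1.0 / (1.0 + np.exp(-(Xq @ w + b)))

base_fitters = [fit_centroid, fit_logreg]

def oof_meta_features(Xtr, ytr, k=7):
    """Out-of-fold predictions: each sub-model is trained on k-1 folds
    and predicts the held-out fold, mirroring the paper's 7-way split."""
    folds = np.array_split(np.arange(len(ytr)), k)
    Z = np.zeros((len(ytr), len(base_fitters)))
    for fold in folds:
        mask = np.ones(len(ytr), bool)
        mask[fold] = False
        for j, fit in enumerate(base_fitters):
            Z[fold, j] = fit(Xtr[mask], ytr[mask])(Xtr[fold])
    return Z

Z_train = oof_meta_features(X_train, y_train)
meta = fit_logreg(Z_train, y_train)  # meta-learner on stacked forecasts

# At test time, base models are refit on the full training set.
base_models = [fit(X_train, y_train) for fit in base_fitters]
Z_test = np.column_stack([m(X_test) for m in base_models])
pred = (meta(Z_test) > 0.5).astype(int)
accuracy = (pred == y_test).mean()
```

The out-of-fold step is what prevents the meta-learner from seeing base-model predictions on data those models were trained on, which would otherwise leak labels into the second stage.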

Comparative Analysis of Glomerular Filtration Rate Measurement Using 99mTc-DTPA and Estimated Glomerular Filtration Rate in Kidney Transplant Donors (신장이식 공여자에서 99mTc-DTPA를 이용한 Glomerular Filtration Rate 측정과 추정사구체여과율의 비교분석)

  • Cheon, Jun Hong;Yoo, Nam Ho;Lee, Sun Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.25 no.2
    • /
    • pp.35-40
    • /
    • 2021
  • Purpose: Glomerular filtration rate (GFR) is an important indicator for the diagnosis, treatment, and follow-up of kidney disease, and is also used in healthy individuals for drug dosing and for evaluating kidney function in donors. The gold standard GFR test is measurement during continuous infusion of inulin, an exogenous marker, but it takes a long time and the procedure is complicated, so the estimated glomerular filtration rate (eGFR), based on the serum creatinine concentration, is used instead. However, creatinine is known to be affected by age, gender, muscle mass, and other factors. The eGFR formulas currently in use include the Cockcroft-Gault formula, the Modification of Diet in Renal Disease (MDRD) formula, and the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formula for adults; for children, the Schwartz formula is used. Measurement of GFR using 51Cr-EDTA (ethylenediaminetetraacetic acid) or 99mTc-DTPA (diethylenetriaminepentaacetic acid) can replace inulin and is currently in use. We therefore compared the GFR measured with 99mTc-DTPA against the eGFR from the CKD-EPI formula. Materials and Methods: For 200 kidney transplant donors who visited Asan Medical Center (96 males, 104 females; age 47.3 ± 12.7 years), GFR was measured from plasma samples (two-plasma-sample method, TPSM) obtained after intravenous administration of 99mTc-DTPA (0.5 mCi, 18.5 MBq). eGFR was derived with the CKD-EPI formula from the serum creatinine concentration. Results: The mean GFR measured with 99mTc-DTPA in the 200 donors was 97.27±19.46 ml/min/1.73m2, the mean eGFR from the CKD-EPI formula was 96.84±17.74 ml/min/1.73m2, and the mean serum creatinine concentration was 0.84±0.39 mg/dL. The regression of 99mTc-DTPA GFR on creatinine-based eGFR was Y = 0.5073X + 48.186, with a correlation coefficient of 0.698 (P<0.01); the difference (%) was 1.52±18.28.
Conclusion: The correlation between the 99mTc-DTPA GFR and the creatinine-derived eGFR was confirmed to be moderate. This is presumably because eGFR is affected by external factors such as age, gender, and muscle mass, and because the formulas were developed for kidney disease patients. Using 99mTc-DTPA, we can provide reliable GFR results for the diagnosis, treatment, and follow-up of kidney disease and for kidney evaluation in transplantation.
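The abstract does not reproduce the CKD-EPI formula it compares against. As an illustration of what "eGFR using the CKD-EPI formula" computes, the following is a sketch of the widely published 2009 CKD-EPI creatinine equation (race coefficient omitted); the paper's exact implementation may differ:

```python
def ckd_epi_egfr(scr_mg_dl: float, age_years: float, is_female: bool) -> float:
    """2009 CKD-EPI creatinine equation, in mL/min/1.73 m^2.

    scr_mg_dl: serum creatinine in mg/dL. Race coefficient omitted here.
    """
    kappa = 0.7 if is_female else 0.9      # sex-specific creatinine knot
    alpha = -0.329 if is_female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if is_female else egfr

# Example near the study's mean values (Scr 0.84 mg/dL, age 47.3 years):
egfr_male = ckd_epi_egfr(0.84, 47.3, is_female=False)
egfr_female = ckd_epi_egfr(0.84, 47.3, is_female=True)
```

The piecewise min/max structure applies a gentler creatinine exponent below the sex-specific knot and a steeper one above it, which is why the same creatinine value maps to different eGFR values for men and women.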

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning scenarios: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a ConvNet pre-trained (for example, on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still challenging. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our pipeline has three steps. First, each image from the target task is fed forward through a pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, which carries more information about the image; concatenating the three fully connected layers yields a 9192-dimensional (4096+4096+1000) feature vector. However, features extracted from multiple layers of the same ConvNet are redundant and noisy, so in the third step we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) comparing multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple-layer representation.
Moreover, our proposed approach achieved 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
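The concatenate-then-PCA step above can be sketched as follows. Random arrays stand in for the AlexNet FC6/FC7/FC8 activations (extracting real activations would require a deep learning framework and pre-trained weights), and PCA is done directly via SVD:

```python
import numpy as np

rng = np.random.default_rng(42)
n_images = 50

# Hypothetical activations standing in for AlexNet's fully connected layers:
fc6 = rng.normal(size=(n_images, 4096))
fc7 = rng.normal(size=(n_images, 4096))
fc8 = rng.normal(size=(n_images, 1000))

# Step 2: concatenate into the 9192-dim (4096+4096+1000) representation.
features = np.hstack([fc6, fc7, fc8])

def pca_reduce(X, k):
    """Step 3: project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                      # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # scores on top-k components

reduced = pca_reduce(features, k=32)             # compact salient representation
```

The reduced vectors (here 32-dimensional, an arbitrary illustrative choice) would then be fed to a conventional classifier such as a linear SVM.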

An Anatomical Study of the Posterior Tympanum (한국인 중이강후벽에 관한 형태해부학적 고찰)

  • 양오규;윤강묵;심상열;김영명
    • Proceedings of the KOR-BRONCHOESO Conference
    • /
    • 1982.05a
    • /
    • pp.17.2-19
    • /
    • 1982
  • The sinus tympani is subject to great variability in size, shape, and posterior extent. A compact bony zone, especially in its posterior portion, and the narrow space between the facial nerve and the posterior semicircular canal limit the surgical approach. In intact-canal-wall tympanoplasty with mastoidectomy, the facial recess should be opened to create a wide connection between the mesotympanum and the mastoid. The surgically created limits of the facial recess are the facial nerve medially, the chorda tympani laterally, and the bone adjacent to the incus superiorly. Using thirty-five adult Korean temporal bones, the authors measured the osteologic relationships of the posterior tympanum, especially the sinus tympani and facial recess. The results were as follows. 1. The average distance from the anterior end of the pyramidal eminence: 1) to the edge of the sinus tympani directly posterior was 2.54 (1.05-5.40) mm; 2) to the maximum posterior extent was 3.22 (1.25-7.45) mm; 3) to the maximum cephalad extent was 0.67 (0.40-1.75) mm. 2. In 82.9% of cases, the sinus tympani extended from the lower margin of the oval window to the upper margin of the round window niche. 3. In 62.9% of cases, the deepest part of the sinus tympani was in the mid portion, between the ponticulus and the subiculum. 4. The oblique dimension from the fossa incudis above to the hypotympanum below was 8.13 (7.90-9.55) mm. 5. The transverse dimension midway between the oval window above and the round window below was 3.00 (2.85-3.45) mm. 6. The transverse dimension at the level of the fossa incudis was 1.81 (1.40-2.15) mm. 7. Facial nerve dehiscence was present in 14.3%. 8. The anterior-posterior diameter of the footplate was 2.98 (2.85-3.05) mm. 9. The average distance from the footplate: 1) to the cochleariform process was 1.42 (1.35-1.55) mm; 2) to the round window niche was 1.85 (1.45-2.10) mm.


Continuity of Scientists' Information Production and Information Flow (2) (과학자(科學者)의 정보생산(情報生産) 계속성(繼續性)과 정보유통(情報流通)(2))

  • Garvey, W.D.
    • Journal of Information Management
    • /
    • v.6 no.5
    • /
    • pp.131-134
    • /
    • 1973
  • In the first report of this series, we described the general procedures and some results of 78 instances of information-exchange activity among 12,442 scientists and engineers in the physical sciences, social sciences, and engineering. Conducted over more than four and a half years (1966-1971), the study was designed and carried out to provide a comprehensive picture of the dissemination and assimilation of information through various formal and informal media, from the time scientists begin research, drawing on the existing body of scientific knowledge, until their results are formally assembled on the record. Reports 2, 3, and 4 gave a general description of the data collected and stored in the data bank. (1) The role of national meetings in the flow of scientific and technical information (Garvey; Report 4). National meetings were found to serve a clear and important function in the overall communication process: they regularly organize both the principal early instances of dissemination of scientific work and the last informal media, before the relatively long interval between submission and journal publication temporarily leaves the public fate of that work uncertain. (2) The dissemination process for information connected with the production of journal articles (Garvey; Report 1). For this study, we followed the information-flow process closely. A striking feature of that process is that information from research does not truly become public until it appears in a journal, and as a consequence advanced research frequently becomes outdated. Experienced consumers of information are highly sensitive to this obsolescence and seek every means of obtaining information about ongoing or recently completed research relevant to their own. For example, rather than wait for the information to appear in a journal, a secondary source, or the other journals they typically use, they try to capture information that might otherwise be missed through the dissemination process that occurs before an article is published. (3) The continuity of information dissemination by "information-producing scientists" (the results of this part of the study series form the main content of this paper). About two-thirds of the scientists who published articles across the two-year span from 1968/1969 to 1970/1971 (having published "high-quality" articles in 1968/1969 journals) continued research in the same subject area as their 1968/1969 articles.
Thus we found that most of the authors in this study practiced science in the manner Kuhn (Report 5) describes as normal science, that is, research whose most important pursuit is obtaining complete answers to the questions raised in the course of inquiry. Having recently completed their research and published the results, these scientists were closely tied, in what they did next, to the work and subjects of other researchers holding the same prior views. An indicator of the effect of this continuity is that about three-quarters of the authors who continued research in the same area reported that the new research derived directly from the results described in the preceding article. Our data show, however, that unexpected shifts to new areas can also occur: more than one-fifth of the authors who had continued work on the same subject later switched to a new area and continued research there. Such changes of research area do not greatly alter a researcher's general pattern of information flow; in the pattern arising from the shift to a new intellectual problem, authors try to fit the methods and techniques of the old problem to the new one. As anticipated by recent interpretations of the history of science (Hanson; Report 6), the continuity of normal science is never absolute, and the first step toward new "scientific knowledge" may be taken into a different area regardless of the subject of the previous research. In our study, one-third of the authors did not carry out follow-on research on the same subject but moved to a new area. We interpret these data to mean (a) that when one examines concentrated scientific effort through the activities of individual scientists, a large amount of continuity in each one's research appears in any advancing scientific field, and (b) that this continuity is a necessary feature of the concentrated advance of science. In the flow problems connected with this continuity, we can also detect the time individual scientists must spend on the information needs required both for progress at each stage of research and for the shift to a new objective when moving to a new subject area. This observation suggests that a selective information-delivery system must be highly flexible if it is to satisfy current information needs effectively. The results of our review of the entire information-flow process described in this series suggest that scientists have continually developed a flexible communication system that accommodates their needs.
This system is organized around dissemination events, and most participants in those events are themselves disseminators of scientific information: it is fundamentally a dissemination system. But as we saw in the communication behavior within this process, most disseminators of information are also assimilators of it; in other words, producers of scientific information are also its users. In this study, the typical scientist was continuously involved in both the production and the dissemination of scientific information. When researchers complete a piece of research, they form an idea of what to do next, and thus make use of the information from the "completed" work while simultaneously starting something new. For example, when scientists reach the stage at which they can give fellow researchers in the same area a complete, defensible report, we find that they play many roles in the information-flow process: they are disseminators of scientific information when they give other scientists the latest scientific results; they are information seekers insofar as they solicit comments and criticism from colleagues about the significance and validity of the work; and insofar as they accept feedback from this contributed and assimilated information for future use (as when preparing a manuscript for journal submission), they are information users. In all likelihood, the information producer has already entered the next round of information production (two-thirds of the authors had already begun new research before their article was published). After finishing their research and preparing a preliminary report, scientists continue to disseminate information about their work; the general pattern includes presentations to small groups of colleagues (e.g., local colloquia) and to large audiences (e.g., national meetings), while various written reports are produced along the way. It is clear, however, that scientists' principal dissemination goal for their research is an article published in a scientific journal. At each dissemination stage on the way to that goal, scientists seek feedback from audiences, from the information they have assimilated, and from information already used. As we have tried to show throughout this series, these activities served scientists and others who seek information continuously, until referees' opinions were reflected in the manuscript and it was accepted for journal publication.
Once a manuscript is accepted, its authors often cease to act as active disseminators of its main content, and their role changes: temporarily, they become assimilators of information in order to start new work. Opinions and criticism of the previous work influence the new work; at the same time, the authors enter a new cycle of scientific-information production and constantly look for information about ongoing or recently completed research. For scientists engaged in active research, it is not practical to separate the role of assimilator from that of disseminator: the former is used to accomplish the latter. At one stage, a scientist's role as disseminator is prominent; at another, information exchange is tied fundamentally to information assimilation. The interrelation between disseminators and assimilators of information (or between producers and users) is an essential feature of science. That the communication structure of science is a complex and dynamic system built around the needs of the disseminator (rather than the user role) arises necessarily from the way science develops. This relates to Medvedev's account of the career of Lysenko [7], which points out that when disseminators of scientific information are denied the opportunity to disseminate information about their work at national meetings, so that the judging and screening of disseminated information diminishes and criticism and revision in journals and monographs are consequently eliminated, institutional science rapidly becomes unscientific.
