• Title/Summary/Keyword: Case studies


Diagnosis of Obstructive Sleep Apnea Syndrome Using Overnight Oximetry Measurement (혈중산소포화도검사를 이용한 폐쇄성 수면무호흡증의 진단)

  • Youn, Tak;Park, Doo-Heum;Choi, Kwang-Ho;Kim, Yong-Sik;Woo, Jong-Inn;Kwon, Jun-Soo;Ha, Kyoo-Seob;Jeong, Do-Un
    • Sleep Medicine and Psychophysiology
    • /
    • v.9 no.1
    • /
    • pp.34-40
    • /
    • 2002
  • Objectives: The gold standard for diagnosing obstructive sleep apnea syndrome (OSAS) is nocturnal polysomnography (NPSG). NPSG is rather expensive and somewhat inconvenient, however, and simpler, cheaper alternatives have therefore been proposed. Oximetry is appealing because of its widespread availability and ease of application. In this study, we evaluated whether oximetry alone can be used to diagnose or screen for OSAS. The diagnostic performance of an analysis algorithm using arterial oxygen saturation ($SaO_2$), based on the 'dip index', mean $SaO_2$, and CT90 (the percentage of time spent at $SaO_2$<90%), was compared with that of NPSG. Methods: Fifty-six patients referred for NPSG to the Division of Sleep Studies at Seoul National University Hospital were randomly selected. For each patient, NPSG with oximetry was carried out. From the oximetry data we obtained three variables: the dip index most strongly correlated with the respiratory disturbance index (RDI) from NPSG, mean $SaO_2$, and CT90. For each variable, the sensitivity, specificity, and positive and negative predictive values against the NPSG diagnosis were calculated. Results: Thirty-nine of the fifty-six patients were diagnosed with OSAS by NPSG. Mean RDI was 17.5, mean $SaO_2$ was 94.9%, and mean CT90 was 5.1%. The dip index [4%-4sec] was most strongly correlated with RDI (r=0.861). With dip index [4%-4sec]${\geq}2$ as the diagnostic criterion, we obtained a sensitivity of 0.95, specificity of 0.71, positive predictive value of 0.88, and negative predictive value of 0.86. Using mean $SaO_2{\leq}97%$, we obtained a sensitivity of 0.95, specificity of 0.41, positive predictive value of 0.79, and negative predictive value of 0.78. Using $CT90{\geq}5%$, we obtained a sensitivity of 0.28, specificity of 1.00, positive predictive value of 1.00, and negative predictive value of 0.38.
Conclusions: The dip index [4%-4sec] and mean $SaO_2{\leq}97%$ obtained from nocturnal oximetry data are helpful in diagnosing OSAS. $CT90{\geq}5%$, given its specificity of 1.00, can also be used to confirm OSAS.
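The four predictive metrics above follow mechanically from a 2x2 confusion matrix. A minimal Python sketch; the counts below (TP=37, FN=2, FP=5, TN=12) are a reconstruction chosen to be consistent with the reported 39 OSAS and 17 non-OSAS patients under the dip index [4%-4sec] ${\geq}$ 2 criterion, not the study's raw data:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Screening-test metrics from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # P(test+ | disease)
        "specificity": tn / (tn + fp),  # P(test- | no disease)
        "ppv": tp / (tp + fp),          # P(disease | test+)
        "npv": tn / (tn + fn),          # P(no disease | test-)
    }

# Counts reconstructed to match the dip index [4%-4sec] >= 2 criterion
# (39 OSAS, 17 non-OSAS); illustrative, not the study's raw data.
m = diagnostic_metrics(tp=37, fn=2, fp=5, tn=12)
print({k: round(v, 2) for k, v in m.items()})
# → {'sensitivity': 0.95, 'specificity': 0.71, 'ppv': 0.88, 'npv': 0.86}
```

Note that these four counts reproduce all four reported figures at once, which is why they are a plausible reconstruction of the study's 2x2 table.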


Application of MicroPACS Using the Open Source (Open Source를 이용한 MicroPACS의 구성과 활용)

  • You, Yeon-Wook;Kim, Yong-Keun;Kim, Yeong-Seok;Won, Woo-Jae;Kim, Tae-Sung;Kim, Seok-Ki
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.51-56
    • /
    • 2009
  • Purpose: Most hospitals have introduced PACS, and use of the system continues to expand. A small-scale PACS, called MicroPACS, can already be built from open-source programs. The aim of this study was to demonstrate the utility of operating a MicroPACS as a substitute back-up device for conventional storage media such as CDs and DVDs, in addition to the full PACS already in use. The study describes how to set up a MicroPACS with open-source programs and assesses its storage capability, stability, compatibility, and performance in operations such as "retrieve" and "query". Materials and Methods: 1. To establish the MicroPACS, we searched for open-source software meeting the following criteria: (1) it must run on the Windows operating system; (2) it must be freeware; (3) it must be compatible with the PET/CT scanner; (4) it must be easy to use; (5) it must not limit storage capacity; (6) it must support DICOM. 2. (1) To evaluate data-storage performance, we compared the time needed to back up data with the open-source software against optical discs (CDs and DVD-RAMs), and likewise compared the time needed to retrieve data. (2) To estimate work efficiency, we measured the time spent finding data on CDs, on DVD-RAMs, and in the MicroPACS; 7 technologists participated. 3. To evaluate stability, we examined whether any data were lost while the system was maintained for a year, and for comparison counted the errors found in 500 randomly selected CDs. Result: 1. Among 11 open-source packages, we chose the Conquest DICOM Server, which uses MySQL as its database management system. 2.
(1) The back-up and retrieval times (min) were as follows: DVD-RAM (5.13, 2.26) vs. Conquest DICOM Server (1.49, 1.19) for GE DSTE (p<0.001); CD (6.12, 3.61) vs. Conquest (0.82, 2.23) for GE DLS (p<0.001); CD (5.88, 3.25) vs. Conquest (1.05, 2.06) for SIEMENS. (2) The time (sec) needed to find data was: CD ($156{\pm}46$), DVD-RAM ($115{\pm}21$), and Conquest DICOM Server ($13{\pm}6$). 3. There was no data loss (0%) over the year, during which 12,741 PET/CT studies were stored in 1.81 TB. By contrast, 14 of the 500 CDs (2.8%) contained errors. Conclusions: A MicroPACS can be set up with open-source software, and its performance was excellent. The system built with open source proved more efficient and more robust than the back-up process using CDs or DVD-RAMs. We believe the MicroPACS can serve as an effective data-storage device as long as its operators continue to develop and systematize it.
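The timing comparisons above reduce to mean ${\pm}$ standard deviation summaries over repeated trials. A small sketch with Python's statistics module; the per-trial retrieval times below are hypothetical, since the paper reports only the summary figures (e.g., CD $156{\pm}46$ sec vs. Conquest $13{\pm}6$ sec):

```python
import statistics

def summarize_trials(times_sec):
    """Mean and sample standard deviation of repeated retrieval-time trials."""
    return statistics.mean(times_sec), statistics.stdev(times_sec)

# Hypothetical per-technologist search times (sec) for one study, echoing
# the reported pattern of CDs being an order of magnitude slower.
cd_times = [150, 200, 110, 180, 120, 160, 170]
server_times = [10, 20, 8, 15, 12, 14, 11]
print(summarize_trials(cd_times))
print(summarize_trials(server_times))
```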


The hydrodynamic characteristics of the canvas kite - 2. The characteristics of the triangular canvas kite - (캔버스 카이트의 유체역학적 특성에 관한 연구 - 2. 삼각형 캔버스 카이트의 특성 -)

  • Bae, Bong-Seong;Bae, Jae-Hyun;An, Heui-Chun;Lee, Ju-Hee;Shin, Jung-Wook
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.40 no.3
    • /
    • pp.206-213
    • /
    • 2004
  • Applications of kites as opening devices for fishing gear are under development around the world. Typical examples are the opening device of the anchored stow net and the buoyancy material of the trawl. While the anchored stow net has proved its capability over the past 20 years, the trawl application has not been widely used: it was first introduced for commercial use without sufficient study and thus revealed many drawbacks. The fundamental hydrodynamics of the kite itself therefore need to be studied further. Models of plate and canvas kites were deployed in a circulating water tank for mechanical testing. Lift and drag tests were performed for objects of varying shape, giving different aspect ratios of rectangular and trapezoidal forms. The results obtained are summarized as follows, where aspect ratio, attack angle, lift coefficient and maximum lift coefficient are denoted A, B, $C_L$ and $C_{Lmax}$ respectively: 1. For the triangular plate, $C_{Lmax}$ was 1.26${\sim}$1.32 with A${\leq}$1 and 38$^{\circ}{\leq}$B${\leq}$42$^{\circ}$; when A${\geq}$1.5 and 20$^{\circ}{\leq}$B${\leq}$50$^{\circ}$, $C_L$ was around 0.85. For the inverted triangular plate, $C_{Lmax}$ was 1.46${\sim}$1.56 with A${\leq}$1 and 36$^{\circ}{\leq}$B${\leq}$38$^{\circ}$; when A${\geq}$1.5 and 22$^{\circ}{\leq}$B${\leq}$26$^{\circ}$, $C_{Lmax}$ was 1.05${\sim}$1.21. For the triangular kite, $C_{Lmax}$ was 1.67${\sim}$1.77 with A${\leq}$1 and 46$^{\circ}{\leq}$B${\leq}$48$^{\circ}$; when A${\geq}$1.5 and 20$^{\circ}{\leq}$B${\leq}$50$^{\circ}$, $C_L$ was around 1.10. For the inverted triangular kite, $C_{Lmax}$ was 1.44${\sim}$1.68 with A${\leq}$1 and 28$^{\circ}{\leq}$B${\leq}$32$^{\circ}$; when A${\geq}$1.5 and 18$^{\circ}{\leq}$B${\leq}$24$^{\circ}$, $C_{Lmax}$ was 1.03${\sim}$1.18. 2.
For the model with A=1/2, an increase in B increased $C_L$ until $C_L$ reached its maximum; thereafter $C_L$ decreased very gradually or remained unchanged. The model with A=2/3 behaved similarly. For the model with A=1, an increase in B increased $C_L$ until its maximum, and $C_L$ did not change dramatically thereafter. For the model with A=1.5, $C_L$ varied only slightly with B, 0.75${\sim}$1.22 over 20$^{\circ}{\leq}$B${\leq}$50$^{\circ}$. For the model with A=2, the trend of $C_L$ as a function of B was almost the same as in the triangular model, with no considerable change over 20$^{\circ}{\leq}$B${\leq}$50$^{\circ}$. 3. Compared with the non-inverted models, $C_L$ of the inverted models reached its maximum rapidly as B increased and then decreased gradually; the others decreased dramatically. 4. The action point of dynamic pressure was close to the rear of the model at small attack angles and close to the front at large attack angles. 5. A camber vertex formed where the fluid pressure was generated; the triangular canvas had a larger camber vertex at higher aspect ratio, while the inverted triangular canvas showed the opposite trend. 6. All canvas kites had a larger camber ratio at higher aspect ratio; the triangular canvas had a larger camber ratio at higher attack angle, while the inverted triangular canvas showed the opposite trend.
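The lift coefficient $C_L$ reported above is, by the standard hydrodynamic definition (not restated in the abstract), the measured lift normalized by dynamic pressure and projected area. A sketch with hypothetical tank readings:

```python
def lift_coefficient(lift_n, rho, v, area):
    """C_L = 2 * L / (rho * v**2 * S): lift normalized by dynamic pressure."""
    return 2.0 * lift_n / (rho * v ** 2 * area)

# Hypothetical reading: 60 N of lift on a 0.1 m^2 canvas model towed at
# 1 m/s in fresh water (rho ~ 1000 kg/m^3).
print(lift_coefficient(60.0, 1000.0, 1.0, 0.1))  # → 1.2
```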

Multimodality Treatment in Patients with Clinical Stage IIIA NSCLC (임상적 IIIA병기 비소세포폐암의 다각적 치료의 효과)

  • Lee, Yun Seun;Jang, Pil Soon;Kang, Hyun Mo;Lee, Jeung Eyun;Kwon, Sun Jung;An, Jin Yong;Jung, Sung Soo;Kim, Ju Ock;Kim, Sun Young
    • Tuberculosis and Respiratory Diseases
    • /
    • v.57 no.6
    • /
    • pp.557-566
    • /
    • 2004
  • Background: To evaluate the effectiveness of multimodality treatment based on induction chemotherapy (CTx) in patients with clinical stage IIIA NSCLC. Methods: From 1997 to 2002, 74 patients with clinical stage IIIA NSCLC underwent induction CTx at Chungnam National University Hospital. Induction CTx comprised two or more cycles of cisplatin-based regimens (etoposide, gemcitabine, vinorelbine, or taxol) followed by tumor evaluation. In the 30 patients with complete resection, additional 4500-5000 cGy radiotherapy (RTx) was delivered to the 15 with pathologic nodal metastasis. Of the 44 patients with unresectable disease, refusal of operation, or incomplete resection, 29 received 60-70 Gy RTx as local treatment. An additional 1-3 cycles of CTx were given to induction-CTx responders in both local-treatment groups. Results: The induction CTx response rate was 44.6% (complete remission 1.4%, partial response 43.2%), with no difference among regimens (p=0.506). After induction CTx, only 33 of the 55 resectable patients (including the 37 initially resectable) underwent surgery, because 13 refused surgery and 9 had poor predicted reserve lung function. There were 30 (40.5%) complete resections, 2 (2.6%) incomplete resections, and 1 (1.3%) open-and-closure. Among the 27 patients in the non-operation group who received chest RTx, the response rate was 4.8% CR and 11.9% PR. In the complete resection group, the relapse-free interval was 13.6 months and the 2-year recurrence rate was 52%. In the non-complete resection (incomplete resection or non-operation) group, the progression-free interval was 11.2 months and the 2-year disease progression rate was 66.7%. The median survival time of the 74 patients was 25.1 months. Comparing the complete resection group with the non-complete resection group, median survival was 31.7 vs. 23.4 months (p=0.024) and the 2-year overall survival rate was 80% vs. 41%.
In the complete resection group, the adjuvant postoperative RTx subgroup had a significantly better 2-year local control rate (0% vs. 40%, p=0.007) but no significant improvement in overall survival (32.2 vs. 34.9 months, p=0.48). Conclusion: Induction CTx is a feasible component of multimodality treatment, especially when followed by complete resection, but overall survival with any local treatment (surgical resection or RTx) was low. Further studies are needed to determine appropriate patient selection, new chemotherapy regimens, and when RTx should be initiated.

Granulocytic Sarcoma(Chloroma) in Leukemic Patients (백혈병 환자의 과립구 육종(녹색종양))

  • Rhee, Seung-Koo;Kang, Yong-Ku;Bahk, Won-Jong;Jung, Yang-Kuk;Lee, Sang-Wook;Jeong, Ji-Ho
    • The Journal of the Korean bone and joint tumor society
    • /
    • v.11 no.1
    • /
    • pp.54-61
    • /
    • 2005
  • Purpose: Granulocytic sarcoma developing in leukemic patients is quite rare and carries a poor prognosis, but its pathogenesis and treatment are not yet established. Through this study we sought to characterize its clinical course, prognosis, and the results of recent treatment. Material and Methods: Twenty patients with granulocytic sarcoma, arising among 2,197 leukemic patients treated from April 1998 to September 2004 at the leukemia center and the orthopaedic department of St. Mary's Hospital, Catholic University of Korea, were followed for 1~78 months (average 18 months). Results: The 20 cases of granulocytic sarcoma comprised 14 among 1,331 patients with acute myelocytic leukemia (AML), 4 among 744 with chronic myelocytic leukemia (CML), and one among 122 with acute biphenotypic leukemia. The overall occurrence rate was thus 0.91% (20 of 2,197 leukemic patients in the same period). The average age was 28.3 years (range 4~52), with male predominance (13 men vs. 7 women). A single lesion was found in 11 cases and multiple lesions in 9; the spine, brain, extremities, chest, and pelvic bone were involved, in order of frequency. The granulocytic sarcoma developed at various stages of the leukemia: 8 cases during complete remission and 12 during treatment of AML. Pathohistologic evaluation, performed in the 6 cases arising in the extremities, confirmed numerous immature myeloblasts mixed with lymphocytes.
Treatment of the granulocytic sarcoma was mainly limited to treatment of the underlying leukemia with Glivec and high-dose steroid therapy (19 cases), combined with bone marrow transplantation in 13 cases; radiation therapy (average 3,500 rads) was also given for 15 of the 20 sarcomas. Patients were followed for an average of 17.5 months after development of the sarcoma. The prognosis was poor: 12 patients (60%) died within 6.5 months of sarcoma development, and the sarcoma was uniformly fatal when it developed during the course of CML (mortality 100%, 4/4 cases). Conclusion: The prognosis of granulocytic sarcoma in leukemic patients is grave, and further studies of its pathogenesis and treatment should be pursued.


Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics
    • /
    • v.22 no.1
    • /
    • pp.42-51
    • /
    • 2011
  • Nuclear medicine imaging (SPECT, PET) is widely used for assessment of myocardial viability and perfusion, but accurately delineating the myocardial infarct region is difficult. The purpose of this study was to investigate a methodological approach for automatic measurement of rat myocardial infarct size using a polar map with an adaptive threshold. A rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq $^{18}F$-FDG. After a 60 min uptake period, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using ordered subset expectation maximization (OSEM) 2D. To automatically delineate the myocardial contour and generate the polar map, we used QGS software (Cedars-Sinai Medical Center). The reference infarct size was defined as the infarcted percentage of the total left myocardium on TTC staining. We compared three threshold methods: a predefined threshold, Otsu's method, and a multiple Gaussian mixture model (MGMM). The predefined threshold method, commonly used in other studies, was applied with threshold values from 10% to 90% in steps of 10%. The Otsu algorithm selects the threshold that maximizes the between-class variance. The MGMM method estimates the distribution of image intensity using multiple Gaussian mixture models (MGMM2, ${\cdots}$, MGMM5) and calculates an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the sub-threshold area relative to the total polar map area. The infarct sizes measured with the different threshold methods were evaluated against the reference infarct size. The mean differences between polar-map defect size at predefined thresholds (20%, 30%, and 40%) and the reference infarct size were $7.04{\pm}3.44%$, $3.87{\pm}2.09%$ and $2.15{\pm}2.07%$, respectively; for Otsu, $3.56{\pm}4.16%$; for MGMM, $2.29{\pm}1.94%$.
The predefined threshold (30%) showed the smallest mean difference from the reference infarct size. However, MGMM was more accurate than the predefined threshold for reference infarct sizes under 10% (MGMM: 0.006%, predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using a multiple Gaussian mixture model. The MGMM method provides an adaptive threshold for each subject and should be useful for automatic measurement of infarct size.
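Of the three threshold methods compared, Otsu's algorithm is fully determined by its criterion of maximizing between-class variance. A pure-Python sketch; the bin count, intensity range, and sample values are illustrative assumptions, not taken from the paper:

```python
def otsu_threshold(values, bins=256, vmin=0.0, vmax=1.0):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = [0] * bins
    for v in values:
        idx = min(bins - 1, int((v - vmin) / (vmax - vmin) * bins))
        hist[idx] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(bins):
        w0 += hist[t]              # pixels at or below bin t (class 0)
        if w0 == 0:
            continue
        w1 = total - w0            # pixels above bin t (class 1)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return vmin + (best_t + 0.5) * (vmax - vmin) / bins

# Bimodal toy "polar map" intensities: normal myocardium near 0.7-0.8,
# hypoperfused defect near 0.1-0.2; the threshold should fall between
# the two clusters.
uptake = [0.1] * 50 + [0.2] * 50 + [0.7] * 50 + [0.8] * 50
print(0.2 <= otsu_threshold(uptake) <= 0.7)  # → True
```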

Studies on the Factors Affecting Barley Injury Caused by Herbicides in Drained Paddy Field (제초제에 의한 답리작맥 약해발생 요인구명에 관한 연구)

  • Whan-Seung Ryang
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.14
    • /
    • pp.147-157
    • /
    • 1973
  • I. The effect of excessive soil moisture (at the time of germination) on barley germination and on herbicide crop damage was investigated. Machete (Butachlor) and TOK (Nitrofen) were applied, each at the rate of 150g ai/10a, to pots whose soil moisture content was controlled by supplying 30, 40, 50 and 60ml of water per 100g of air-dried soil, respectively. The results are summarized as follows: 1. Excessive soil moisture beyond field moisture capacity greatly inhibited barley germination, by 20 to 100%, even in untreated (check) pots; root development and further growth were also greatly inhibited even when the seeds germinated. 2. The same tendency in inhibition of germination and growth was observed in treated pots, but on the whole the damage was heavier. II. Wanju naked spring barley was seeded on four different soils and covered with soil to a depth of 1 cm; Machete, TOK, Saturn and HE-314 were then applied at rates of 180, 150 and 200, 150, and 250g ai/10a, respectively, and the effect of soil texture on herbicide crop damage was investigated. The results are summarized as follows: 1. Machete (emulsion and granule, at 180g ai/10a): The degree of crop damage differed markedly by soil texture. While almost no crop damage was observed on clay loam regardless of formulation, damage became heavier as the soil became sandier (sandy clay loam, volcanic ash loam, sandy loam), with great growth inhibition on sandy loam. In general the emulsion caused heavier damage than the granular formulation. 2. TOK (wettable powder, at 150 and 250g ai/10a): Almost the same tendency as with Machete was observed, and damage became heavier as the application rate increased. 3.
Saturn (at 150g ai/10a): No great difference in crop damage among soil textures was observed. 4. HE-314 (at 250g ai/10a): Almost no difference in crop damage among soil textures was observed at this rate. III. To study the dependence of crop damage on soil covering depth (4 levels), 9 herbicides (TOK, MO, HE-314, Machete, Saturn, Simetryne, Simazine, Gesaran, Lorox) were applied to pots with two different soils, and the effect of covering depth on herbicide crop damage was investigated. The results are summarized as follows: Light clay soil: 1. Barley growth in relation to covering depth in check pots followed the order, from vigorous to weak, 1cm>1.5cm>0.5cm>0cm; at 0 and 0.5cm covering, growth was very poor. 2. Damage at 0 and 0.5cm covering in treated pots was very severe, but Saturn, Machete, MO and TOK at 100 to 150g ai/10a, and HE-314 at 250 to 375g ai/10a, were relatively safe to barley at depths of 1cm and above. 3. Simazine, Lorox and Simetryne caused slight damage even at 1.5cm covering. Sandy loam soil: Barley growth in relation to covering depth in untreated pots followed the order, from vigorous to weak, 1.5cm, 0.5cm, 3cm, 5cm. While MO was safe to barley at 1.5cm covering, the other chemicals required more than 3cm covering for safe use. Machete and Saturn at 100g ai/10a, and HE-314 at 250g ai/10a, were relatively safe at more than 3cm covering. Simazine, Lorox, Simetryne and Gesaran were unsafe on the sandy soil regardless of covering depth.


Studies on Lipids in Fresh-Water Fishes 7. Comparison of Lipid Components among Wild and Cultured Eel (Anguilla japonica), and Conger Eel (Astroconger myriaster) (담수어의 지질에 관한 연구 7. 천연 및 양식 뱀장어와 붕장어의 지질성분 비교)

  • CHOI Jin-Ho;RHIM Chae-Hwan;BAE Tae-Jin;BYUN Dae-Seok;YOON Tai-Heon
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.18 no.5
    • /
    • pp.439-446
    • /
    • 1985
  • This study was designed to compare the lipid components of wild and cultured eel (Anguilla japonica) and conger eel (Astroconger myriaster). The lipid components of cultured eel were analyzed and compared with those of wild eel and conger eel. The total lipid content of cultured eel was slightly higher than that of wild eel and about 2 times higher than that of conger eel. The lipid content in the edible portion of wild and cultured eel was about 5 times higher than that in the viscera, whereas in conger eel the edible portion showed a level similar to the viscera. In the fatty acid composition of the neutral lipid of the edible portion, the percentages of $C_{14:0},\;C_{16:0}\;and\;C_{18:1}$ in cultured eel were higher than in wild eel, while those of $C_{16:1},\;C_{18:2},\;C_{18:3},\;C_{20:4},\;C_{20:5},\;C_{22:5}\;and\;C_{22:6}$ were lower; the percentages of $C_{18:0},\;C_{20:4}\;and\;C_{22:6}$ in conger eel were noticeably higher than in wild and cultured eels. In the phospholipid of the edible portion, the percentages of $C_{18:0}\;and\;C_{18:2}$ in cultured eel were higher than in wild eel, while those of $C_{16:0},\;C_{16:1},\;C_{18:1},\;C_{18:3},\;C_{20:4},\;C_{20:5},\;C_{22:5}\;and\;C_{22:6}$ were lower. The degree of unsaturation (TUFA/TSFA) of the neutral lipid showed no significant difference among wild eel, cultured eel, and conger eel, but that of the phospholipid was higher in wild eel than in cultured eel and conger eel. The essential fatty acid content (TEFA) of the neutral lipid of the edible portion of wild eel was about 3 times higher than that of cultured eel, but the TEFA of the phospholipid of the edible portion showed no significant difference among wild eel, cultured eel, and conger eel.
The w3 highly unsaturated fatty acid content (w3 HUFA) of the neutral lipid of the edible portion of wild eel was 2.0 to 2.5 times higher than that of cultured eel and conger eel, and the w3 HUFA of the phospholipid of the edible portion of wild eel was also noticeably higher. In the ratio (A/B) of each fatty acid content in cultured eel (A) to that in the diet (B), the A/B ratios of $C_{18:2}\;w6,\;C_{18:3}\;w3,\;C_{20:5}\;w3\;and\;C_{22:6}\;w3$ were 0.23 to 0.48, much lower than those of the other fatty acids. Consequently, the w3 HUFA ratios appear to be related to the biosynthesis of polyenoic acids and the growth rate of cultured eel.


Difference in Chemical Composition of PM2.5 and Investigation of its Causing Factors between 2013 and 2015 in Air Pollution Intensive Monitoring Stations (대기오염집중측정소별 2013~2015년 사이의 PM2.5 화학적 특성 차이 및 유발인자 조사)

  • Yu, Geun Hye;Park, Seung Shik;Ghim, Young Sung;Shin, Hye Jung;Lim, Cheol Soo;Ban, Soo Jin;Yu, Jeong Ah;Kang, Hyun Jung;Seo, Young Kyo;Kang, Kyeong Sik;Jo, Mi Ra;Jung, Sun A;Lee, Min Hee;Hwang, Tae Kyung;Kang, Byung Chul;Kim, Hyo Sun
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.34 no.1
    • /
    • pp.16-37
    • /
    • 2018
  • In this study, differences in the chemical composition of $PM_{2.5}$ observed between 2013 and 2015 at six air-quality intensive monitoring stations (Baengnyeongdo (BR), Seoul (SL), Daejeon (DJ), Gwangju (GJ), Ulsan (US), and Jeju (JJ)) were investigated, and the possible factors causing the differences were discussed. $PM_{2.5}$, organic and elemental carbon (OC and EC), and water-soluble ionic species concentrations were observed on an hourly basis at the six stations. The differences in chemical composition by region were examined based on emissions of gaseous criteria pollutants (CO, $SO_2$, and $NO_2$), meteorological parameters (wind speed, temperature, and relative humidity), and the origins and transport pathways of air masses. For the years 2013 and 2014, annual average $PM_{2.5}$ was in the order SL ($${\sim_=}DJ$$)>GJ>BR>US>JJ, but in 2015 the highest concentration was found at DJ, followed by GJ ($${\sim_=}SL$$)>BR>US>JJ. Similar patterns were found for $SO{_4}^{2-}$, $NO_3{^-}$, and $NH_4{^+}$. The lower $PM_{2.5}$ at SL than at DJ and GJ resulted from low concentrations of secondary ionic species. Annual average concentrations of OC and EC by region differed little among the years, but their regional patterns were distinct from those of the $PM_{2.5}$, $SO{_4}^{2-}$, $NO_3{^-}$, and $NH_4{^+}$ concentrations. Four-day air-mass backward trajectory calculations indicated that when daily average $PM_{2.5}$ exceeded the monthly average, >70% of the air masses reaching the stations came from polluted northeastern Chinese regions, indicating that long-range transport (LTP) was an important contributor to $PM_{2.5}$ and its chemical composition at the stations.
The lower concentrations of secondary ionic species and $PM_{2.5}$ at SL in 2015 than at the DJ and GJ sites were due to a decreased impact of LTP from polluted Chinese regions, rather than to differences in local emissions of criteria gas pollutants ($SO_2$, $NO_2$, and $NH_3$) among the SL, DJ, and GJ sites. The regional differences in annual average $SO{_4}^{2-}$ resulted from a combination of differences in local $SO_2$ emissions and chemical conversion of $SO_2$ to $SO{_4}^{2-}$, and LTP from China; however, the $SO{_4}^{2-}$ at the sites was influenced more by LTP than by chemical transformation of locally emitted $SO_2$. The $NO_3{^-}$ increase was closely associated with increases in local emissions of nitrogen oxides at the four urban sites (excluding BR and JJ), with a small contribution from LTP. Among the meteorological parameters (wind speed, temperature, and relative humidity), ambient temperature was the most important factor controlling the variation of $PM_{2.5}$ and its major chemical components: as the average temperature increased, the $PM_{2.5}$, OC, EC, and $NO_3{^-}$ concentrations tended to decrease, most prominently for $NO_3{^-}$. Results from a case study of the $PM_{2.5}$ and major chemical data observed between February 19 and March 2, 2014 at all stations suggest that ambient $SO{_4}^{2-}$ and $NO_3{^-}$ concentrations are not necessarily proportional to the concentrations of their precursor emissions, because their formation rates and gas/particle partitioning may be controlled by factors (e.g., long-range transport) other than the concentration of the precursor gases.

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rank to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is expressed in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS.
The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned according to the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that the Semantic Web contains many heterogeneous classes, so applying a different appraisal standard to each class is more reasonable. This is similar to human evaluation, where different items are assigned specific weights that are then summed into a weighted average. We can also check for missing properties more easily with this approach than with predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the assignment order. This idea comes from studies in psychology in which expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his or her collections. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should carry more weight than old data.
We propose a comprehensive folksonomy ranking framework in which all these considerations are addressed and which can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and to show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed to analyze the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction appears preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms.
While the matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with ours. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
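For contrast with the mutual-interaction approach, the classic HITS algorithm referenced above can be sketched in a few lines; this is Kleinberg's textbook formulation applied to a toy link graph, not the authors' proposed algorithm:

```python
def hits(graph, iters=50):
    """Classic HITS: mutually reinforcing hub and authority scores."""
    nodes = set(graph) | {v for targets in graph.values() for v in targets}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        # A node's authority is the sum of hub scores of nodes linking to it.
        auth = {n: sum(hub[u] for u in graph if n in graph[u]) for n in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {n: a / norm for n, a in auth.items()}
        # A node's hub score is the sum of authority scores of its targets.
        hub = {n: sum(auth[v] for v in graph.get(n, ())) for n in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {n: h / norm for n, h in hub.items()}
    return hub, auth

# Toy link graph: both 'a' and 'b' point to 'c', so 'c' earns the top
# authority score while 'a' and 'b' become the strongest hubs.
hub, auth = hits({"a": ["c"], "b": ["c"], "c": ["d"]})
print(max(auth, key=auth.get))  # → c
```

Because each iteration recomputes authorities from hubs and vice versa (two matrix-vector style passes), one can see concretely the double multiplication that the paper's algorithm avoids.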