• Title/Summary/Keyword: 변수 (variable)

Search results: 31,021 (processing time: 0.06 seconds)

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.95-118, 2017
  • Recently, transactions in row and multiplex housing have become active, centered on the downtown area, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing nevertheless remains a blind spot for real estate information: changes in market size and demand have produced information asymmetry, which has become a social problem. In addition, the 5- and 25-district schemes used by the Seoul Metropolitan Government and the Korea Appraisal Board (hereafter, KAB) were drawn along administrative boundaries for urban planning and then reused in existing real estate studies; they are not district classifications designed for real estate research. Building on prior work, this study found that Seoul's spatial structure needs to be redefined when estimating future housing prices. Simple division by administrative district has proven inefficient, so this study attempts to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing, clustering Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to actual transaction price data for row and multiplex housing, and the K-means clustering algorithm was used to cluster the spatial structure of Seoul. The data were the actual transaction prices of Seoul row and multiplex housing from January 2014 to December 2016 and the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Data preprocessing consisted of removing underground transactions, standardizing price per area, and removing outlying transaction cases (above 5 and below -5).
Preprocessing reduced the data from 132,707 cases to 126,759. The R program was used as the analysis tool. After preprocessing, the data model was constructed: K-means clustering was performed first, then a regression analysis using the hedonic model, followed by a cosine similarity analysis. Based on the constructed model, Seoul was clustered by longitude and latitude and compared with the existing districts. The goodness of fit of the model was above 75%, and the variables used in the hedonic model were significant. In other words, the 5 and 25 districts of the existing administrative scheme were divided into 16 clusters. This study thus derives a clustering method for row and multiplex housing in Seoul that uses the K-means clustering algorithm and a hedonic model reflecting price characteristics, and it presents academic and practical implications as well as limitations and directions for future research. Academically, the clustering reflects property price characteristics, improving on the districts used by the Seoul Metropolitan Government, KAB, and existing real estate research; moreover, whereas existing research has focused mainly on apartments, this study proposes a method of classifying areas in Seoul using public information (the actual transaction data of MOLIT) under Government 3.0. Practically, the results can serve as basic data for research on row and multiplex housing, are expected to stimulate such research, and should increase the accuracy of models of actual transactions.
Future research should conduct various analyses to overcome these limitations and pursue the topic in greater depth.
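The clustering step described above, K-means over the longitude and latitude of transaction records, can be sketched in a few lines. The paper's analysis was done in R, so this stdlib-Python version with deterministic initialization is purely illustrative:

```python
import math

def kmeans(points, k, iters=20):
    """Minimal K-means over (longitude, latitude) pairs.

    Deterministic first-k initialization for reproducibility; a real
    analysis (like the paper's R workflow) would use k-means++ or
    multiple random restarts.
    """
    centroids = [tuple(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        # move each centroid to the mean of its cluster (keep it if empty)
        centroids = [
            tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters
```

Each resulting cluster would then get its own hedonic regression of price on property characteristics, as the study does.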

Correlation analysis of radiation therapy position and dose factors for left breast cancer (좌측 유방암의 방사선치료 자세와 선량인자의 상관관계 분석)

  • Jeon, Jaewan;Park, Cheolwoo;Hong, Jongsu;Jin, Seongjin;Kang, Junghun
    • The Journal of Korean Society for Radiation Therapy, v.29 no.1, pp.37-48, 2017
  • Purpose: The most basic requirement of radiation therapy is to prevent unnecessary exposure of normal tissue. In radiation therapy for breast cancer, it is important to evaluate the dose delivered to the lung and heart. This study therefore compares the dose factors of normal tissue according to treatment position and, through correlation analysis, seeks a more effective radiation treatment for breast cancer. Materials and Methods: Computed tomography was performed on 30 patients with left breast cancer in the supine and prone positions, and computerized treatment plans were created with the Eclipse Treatment Planning System (ver. 11). Using DVHs, the dose delivered to normal tissue was compared by position. Based on those results, the dose factors of each normal tissue were analyzed with SPSS (ver. 18): correlations between variables were examined, and associations were tested with independent-sample tests. Finally, HI and CI values in the supine and prone positions were compared using MIRADA RTx (ver. ad 1.6). Results: In the supine position, the treatment plans showed a lung V20 of $16.5{\pm}2.6\%$, V30 of $13.8{\pm}2.2\%$, and mean dose of $779.1{\pm}135.9cGy$ (absolute value); in the prone position, the corresponding values were $3.1{\pm}2.2\%$, $1.8{\pm}1.7\%$, and $241.4{\pm}138.3cGy$. The prone position showed a lower dose overall, with the mean dose reduced by 537.7 cGy. For the heart, V30 was $8.1{\pm}2.6\%$ vs. $5.1{\pm}2.5\%$ and the mean dose $594.9{\pm}225.3$ vs. $408{\pm}183.6cGy$ in the supine and prone positions, respectively. In the statistical analysis, Cronbach's alpha as a reliability index was 0.563. Correlation analysis between position and the lung dose factors gave coefficients of about 0.89 or higher, indicating high correlation; the heart was less correlated, with V30 at 0.488 and mean dose at 0.418.
The independent-samples t-test showed that the differences in lung and heart dose factors by position were significant at the 99% confidence level. Conclusion: Radiation therapy today benefits from state-of-the-art linear accelerators and a variety of treatment-planning technologies, whose basic premise is protection of the normal tissue around the PTV. Treating a breast cancer patient in the prone position does take more time and raises set-up reproducibility problems. Nevertheless, as the results show, the prone position can reduce the dose delivered to the lungs and heart. In conclusion, given sufficient treatment time and correct position verification, radiation treatment in the prone position will be more effective for the patient.
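The position-versus-dose-factor correlations the authors compute in SPSS are ordinary Pearson coefficients; a stdlib-only sketch (the variable values below are invented, not the study's data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length
    sequences, e.g. a dose factor measured per patient in the
    supine vs. prone position."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
```

A coefficient near 0.89, as reported for the lung factors, indicates a strong linear relationship; 0.4-0.5, as for the heart, a weak one.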


Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.141-154, 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology for distinguishing poor from high-quality content through text data about products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions with respect to accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels, and it is one of the most active research areas in natural language processing and text mining. Because online reviews are openly available, they are easy to collect and they affect business outcomes: in marketing, real-world customer information is gathered from websites rather than surveys, and whether a website's posts are positive or negative is reflected in sales. However, many reviews on a website are poorly written and difficult to classify. Earlier studies in this area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. A lack of accuracy is recognized, however, because sentiment calculations change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of polarity analysis using the IMDB review data set.
First, for text classification related to sentiment analysis, popular machine learning algorithms are adopted as comparative models: NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). A CNN can process a sentence in vector format much like a bag of words, but it does not consider the sequential nature of the data. An RNN handles order well because it takes the temporal information of the data into account, but it suffers from long-term dependency problems; the LSTM was introduced to solve them. For comparison, CNN and LSTM were chosen as simple deep learning models, and the classical machine learning algorithms, CNN, LSTM, and the integrated model were all analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to understand how and why the models work well for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining the two algorithms are as follows. A CNN can extract classification features automatically through its convolution layers and massively parallel processing. An LSTM is not capable of such parallelism, but, like faucets, its input, output, and forget gates can be opened and closed at the desired time, with the advantage of placing memory blocks on hidden nodes. The LSTM's memory block may not store all the data, but it can solve the long-term dependency problem that a CNN cannot.
Furthermore, when an LSTM is attached after the CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN but faster than the LSTM, and it was more accurate than the other models. In addition, each word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM compensates for the weaknesses of each model, and the end-to-end structure has the advantage of improving learning layer by layer. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
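The faucet-like gating the abstract describes can be made concrete with a single scalar LSTM step. This is a generic textbook LSTM cell, not the paper's actual model, and all weight names below are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell. `w` maps each gate name to an
    illustrative (w_x, w_h, b) weight triple."""
    def gate(name, squash):
        wx, wh, b = w[name]
        return squash(wx * x + wh * h_prev + b)
    i = gate("input", sigmoid)    # how much new information to let in
    f = gate("forget", sigmoid)   # how much old cell state to keep
    o = gate("output", sigmoid)   # how much of the cell to expose
    g = gate("cand", math.tanh)   # candidate memory content
    c = f * c_prev + i * g        # memory block update
    h = o * math.tanh(c)          # hidden state passed to the next step
    return h, c
```

Because the forget gate multiplies the previous cell state, gradients can flow across long spans, which is what lets the LSTM half of the combined model capture the sequential structure the CNN half ignores.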

Validity and Pertinence of Administrative Capital City Proposal (행정수도 건설안의 타당성과 시의성)

  • 김형국
    • Journal of the Korean Geographical Society, v.38 no.2, pp.312-323, 2003
  • This writer absolutely agrees with the government that regional disequilibrium is severe enough to justify considering a move of the administrative capital. Pursuing this course solely to establish balanced development, however, is not a convincing enough reason. The capital city is directly related not only to the social and economic situation but, much more importantly, to the domestic political situation as well. In the mid-1970s, the Third Republic's proposal to move the capital temporarily was based entirely on security grounds. At the time, the opposition leader Kim Dae-jung said that establishing a safe distance from the demilitarized zone (DMZ) reflected a typically military decision; in his view, retaining the capital close to the DMZ would show more consideration for the will of the people to defend their own country. In fact, independent Pakistan moved its capital from Karachi to Islamabad, situated close to Kashmir, the subject of a hot territorial dispute with India. It is regrettable that no consideration has been given to the urgent political situation on the Korean peninsula, which is presently enveloped in a dense nuclear fog. As a person requires health to pursue a dream, a country must have security to implement balanced territorial development. According to current urban theories, the fate of a country depends on its major cities. A negligently guarded capital runs the risk of becoming a hostage and bringing ruin to the whole country. In this vein, North Korea's undoubted main target in any armed communist reunification of Korea is Seoul. For the preservation of the state, therefore, Seoul must be shielded from becoming hostage to North Korea. The stationing of US armed forces north of the capital is based on the judgment that the defense of Seoul is of absolute importance.
At the same time, regardless of their different standpoints, South and North Korea agree that the division of the Korean people into two separate countries is abnormal. Reunification, which so far has defied all predictions, may be realized earlier than anyone expects, and the day of reunification seems the best day for relocating the capital: building a proper capital city would take at least twenty years, and a capital cannot be dragged from one place to another. On the day of a free and democratic reunification, a national agreement will naturally be reached to find a nationally symbolic city, as in Brazil or Australia. Even if security posed no problem, the government's approach would not contribute greatly to the balanced development of the country. The Chungcheong region, earmarked as the new location of the capital, has been the greatest beneficiary of its proximity to the capital region; since it is not a disadvantaged region, locating the capital there would not help alleviate regional disparity. If a candidate region must be found at present, then considering security, balanced regional development, and a post-reunification future, the Cheolwon area in the middle of the Korean peninsula may be a plausible choice. Even if the transfer of the capital is delayed in view of the present political conflict between South and North Korea, there is a definite shortcut to balanced regional development: not the geographical dispersal of the central government, but the decentralization of power to the provinces. If the government has surplus money to build a new symbolic capital, it should instead improve, for instance, the quality of drinking water, which everyone now eschews, and help the regional subway authority whose chronic deficit resulted in a recent disastrous accident.
It would be proper to time the transfer of the capital to coincide with the reunification of Korea, whenever Providence intends it.

Research on Perfusion CT in Rabbit Brain Tumor Model (토끼 뇌종양 모델에서의 관류 CT 영상에 관한 연구)

  • Ha, Bon-Chul;Kwak, Byung-Kook;Jung, Ji-Sung;Lim, Cheong-Hwan;Jung, Hong-Ryang
    • Journal of radiological science and technology, v.35 no.2, pp.165-172, 2012
  • We investigated the vascular characteristics of tumors and normal tissue using perfusion CT in a rabbit brain tumor model. A VX2 carcinoma suspension of $1{\times}10^7$ cells/ml (0.1 ml) was implanted in the brains of nine New Zealand white rabbits (weight 2.4-3.0 kg, mean 2.6 kg). Perfusion CT was scanned once the tumors had grown to 5 mm. Tumor volume and perfusion values were quantitatively analyzed using a commercial workstation (Advantage Windows workstation, AW, version 4.2, GE, USA). The mean volume of the implanted tumors was $316{\pm}181mm^3$; the largest and smallest tumors were 497 $mm^3$ and 195 $mm^3$, respectively. All implanted tumors were single-nodular, and no intracranial metastasis was observed. On perfusion CT, cerebral blood volume (CBV) was $74.40{\pm}9.63$, $16.08{\pm}0.64$, and $15.24{\pm}3.23$ ml/100g in the tumor core, ipsilateral normal brain, and contralateral normal brain, respectively ($p{\leqq}0.05$). For cerebral blood flow (CBF), there were significant differences between the tumor core and both normal brain regions ($p{\leqq}0.05$), but not between the ipsilateral and contralateral normal brain ($962.91{\pm}75.96$ vs. $357.82{\pm}12.82$ vs. $323.19{\pm}83.24$ ml/100g/min). For mean transit time (MTT), there were significant differences between the tumor core and both normal brain regions ($p{\leqq}0.05$), but not between the ipsilateral and contralateral normal brain ($4.37{\pm}0.19$ vs. $3.02{\pm}0.41$ vs. $2.86{\pm}0.22$ sec). For permeability surface (PS), there were significant differences among the tumor core, ipsilateral, and contralateral normal brain ($47.23{\pm}25.45$ vs. $14.54{\pm}1.60$ vs. $6.81{\pm}4.20$ ml/100g/min) ($p{\leqq}0.05$). Time to peak (TTP) showed no significant differences among the three regions.
For the positive enhancement integral (PEI), there were significant differences among the tumor core, ipsilateral, and contralateral brain ($61.56{\pm}16.07$ vs. $12.58{\pm}2.61$ vs. $8.26{\pm}5.55$ ml/100g) ($p{\leqq}0.05$). For the maximum slope of increase (MSI), there were significant differences between the tumor core and both normal brain regions ($p{\leqq}0.05$), but not between the ipsilateral and contralateral normal brain ($13.18{\pm}2.81$ vs. $6.99{\pm}1.73$ vs. $6.41{\pm}1.39$ HU/sec). For the maximum slope of decrease (MSD), there was a significant difference between the tumor core and the contralateral normal brain ($p{\leqq}0.05$), but not between the tumor core and the ipsilateral normal brain ($4.02{\pm}1.37$ vs. $4.66{\pm}0.83$ vs. $6.47{\pm}1.53$ HU/sec). In conclusion, VX2 tumors were implanted successfully in the rabbit brain, and the stereotactic inoculation method produced single-nodular tumors without intracranial metastasis, suitable for comparative study of tumors and normal tissue. Perfusion CT should therefore be a useful diagnostic tool capable of reflecting the vascularity of tumors.
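The three-region comparisons above (tumor core vs. ipsilateral vs. contralateral normal brain) are the classic setting for a one-way ANOVA. A minimal F-statistic sketch, with invented numbers rather than the study's measurements:

```python
def anova_f(groups):
    """One-way ANOVA F statistic across several groups of
    measurements, e.g. a perfusion parameter in three brain regions."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # variation of observations around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (group means far apart relative to within-group scatter) corresponds to the significant differences reported for CBV, PS, and PEI; an F near zero corresponds to the non-significant TTP result.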

Risk Factor Analysis for Operative Death and Brain Injury after Surgery of Stanford Type A Aortic Dissection (스탠포드 A형 대동맥 박리증 수술 후 수술 사망과 뇌손상의 위험인자 분석)

  • Kim Jae-Hyun;Oh Sam-Sae;Lee Chang-Ha;Baek Man-Jong;Hwang Seong-Wook;Lee Cheul;Lim Hong-Gook;Na Chan-Young
    • Journal of Chest Surgery, v.39 no.4 s.261, pp.289-297, 2006
  • Background: Surgery for Stanford type A aortic dissection carries a high operative mortality rate and frequent postoperative brain injury. This study was designed to identify the risk factors for operative mortality and brain injury after surgical repair in patients with type A aortic dissection. Material and Method: One hundred and eleven patients with type A aortic dissection who underwent surgical repair between February 1995 and January 2005 were reviewed retrospectively; 99 dissections were acute and 12 chronic. Univariate and multivariate analyses were performed to identify risk factors for operative mortality and brain injury. Result: Hospital mortality occurred in 6 patients (5.4%), permanent neurologic deficit in 8 (7.2%), and transient neurologic deficit in 4 (3.6%). Overall 1-, 5-, and 7-year survival rates were 94.4%, 86.3%, and 81.5%, respectively. Univariate analysis revealed four statistically significant predictors of mortality: previous chronic type III dissection, emergency operation, intimal tear in the aortic arch, and deep hypothermic circulatory arrest (DHCA) for more than 45 minutes. Multivariate analysis identified previous chronic type III aortic dissection (odds ratio (OR) 52.2) and DHCA for more than 45 minutes (OR 12.0) as risk factors for operative mortality; pathological obesity (OR 12.9) and total arch replacement (OR 8.5) were statistically significant risk factors for brain injury. Conclusion: Considering the mortality rate, the incidence of neurologic injury, and the long-term survival rate, the results of surgical repair for Stanford type A aortic dissection were good. Surgery for type A aortic dissection in patients with a history of chronic type III dissection may increase the risk of operative mortality; special care should be taken, and efforts to reduce the hypothermic circulatory arrest time should always be kept in mind.
Surgeons planning to operate on patients with pathological obesity, or to perform total arch replacement, should seriously consider the higher risk of brain injury.
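The odds ratios quoted in the multivariate analysis have the familiar 2×2 interpretation. The paper's ORs come from multivariate (logistic) modelling, not raw tables, so the sketch below with invented counts only illustrates what an OR measures:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
        a = exposed, event        b = exposed, no event
        c = unexposed, event      d = unexposed, no event
    e.g. exposure = DHCA > 45 min, event = operative death."""
    return (a / b) / (c / d)
```

An OR of 12.0, as reported for prolonged DHCA, means the odds of the event among the exposed are twelve times those among the unexposed, holding the other model covariates fixed.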

Comparison of Left Ventricular Volume and Function between 16 Channel Multi-detector Computed Tomography (MDCT) and Echocardiography (16 채널 Multi-detector 컴퓨터 단층촬영과 심초음파를 이용한 좌심실 용적과 기능의 비교)

  • Park, Chan-Beom;Cho, Min-Seob;Moon, Mi-Hyoung;Cho, Eun-Ju;Lee, Bae-Young;Kim, Chi-Kyung;Jin, Ung
    • Journal of Chest Surgery, v.40 no.1 s.270, pp.45-51, 2007
  • Background: Although echocardiography is usually used for quantitative assessment of left ventricular function, the recently developed 16-slice multidetector computed tomography (MDCT) can evaluate not only the coronary arteries but also left ventricular function. The objective of our study was therefore to compare the values of left ventricular function quantified by MDCT with those from echocardiography, to evaluate its clinical applicability. Material and Method: Of 49 patients who underwent MDCT in our hospital from November 1, 2003 to January 31, 2005, we enrolled the 20 who also underwent echocardiography during the same period. Left ventricular end-diastolic volume index (LVEDVI), left ventricular end-systolic volume index (LVESVI), stroke volume index (SVI), left ventricular mass index (LVMI), and ejection fraction (EF) were analyzed. Result: Average LVEDVI ($80.86{\pm}34.69mL$ for MDCT vs. $60.23{\pm}29.06mL$ for echocardiography, p<0.01), average LVESVI ($37.96{\pm}24.52mL$ vs. $25.68{\pm}16.57mL$, p<0.01), average SVI ($42.90{\pm}15.86mL$ vs. $34.54{\pm}17.94mL$, p<0.01), average LVMI ($72.14{\pm}25.35mL$ vs. $130.35{\pm}53.10mL$, p<0.01), and average EF ($55.63{\pm}12.91$ for MDCT vs. $59.95{\pm}12.75$ for echocardiography, p<0.05) all differed significantly between the two modalities. Average LVEDVI, LVESVI, and SVI were higher on MDCT, while average LVMI and EF were higher on echocardiography. Comparing the correlation of each parameter between the two modalities, LVEDVI $(r^2=0.74,\;p<0.0001)$, LVESVI $(r^2=0.69,\;p<0.0001)$, and SVI $(r^2=0.55,\;p<0.0001)$ showed high relevance, LVMI $(r^2=0.84,\;p<0.0001)$ very high relevance, and EF $(r^2=0.45,\;p=0.0002)$ relatively high relevance.
Conclusion: Quantitative assessment of left ventricular volume and function using 16-slice MDCT correlated highly with echocardiography and may therefore be a feasible assessment method. However, because the averages of the parameters differed significantly, the absolute values from the two modalities may not be interchangeable in clinical applications. Furthermore, considering the future development of MDCT, we expect it to allow assessment of coronary artery stenosis together with left ventricular function in patients with coronary artery disease.
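The $r^2$ agreement values reported above come from simple linear regression of one modality's measurements on the other's. A stdlib sketch (the sample data in the test are invented):

```python
def linfit_r2(x, y):
    """Least-squares line y ~ a + b*x and its r-squared, the
    agreement measure used to compare MDCT with echocardiography."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot
```

A high $r^2$ (as for LVMI, 0.84) means the modalities rank patients consistently even when, as here, their absolute values differ systematically.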

Changes of Brain Natriuretic Peptide Levels according to Right Ventricular Hemodynamics after a Pulmonary Resection (폐절제술 후 우심실의 혈역학적 변화에 따른 BNP의 변화)

  • Na, Myung-Hoon;Han, Jong-Hee;Kang, Min-Woong;Yu, Jae-Hyeon;Lim, Seung-Pyung;Lee, Young;Choi, Jae-Sung;Yoon, Seok-Hwa;Choi, Si-Wan
    • Journal of Chest Surgery, v.40 no.9, pp.593-599, 2007
  • Background: The correlation between levels of brain natriuretic peptide (BNP) and the effect of pulmonary resection on the right ventricle is not yet widely known. This study aims to assess the relationship between changes in the hemodynamic values of the right ventricle and increased BNP levels as a compensatory mechanism for right heart failure following pulmonary resection, and to evaluate the role of the BNP level as an index of right heart failure after pulmonary resection. Material and Method: In 12 non-small cell lung cancer patients who underwent lobectomy or pneumonectomy, NT-proBNP was measured by an immunochemical method (Elecsys $1010^{(R)}$, Roche, Germany) and compared with hemodynamic variables determined with a Swan-Ganz catheter before and after surgery. Echocardiography was performed before and after surgery to measure changes in right and left ventricular pressures. For statistical analysis, the Wilcoxon rank sum test and linear regression analysis were conducted using SPSSWIN (version 11.5). Result: Postoperative NT-proBNP (pg/mL) increased significantly at 6 hours and at 1, 2, 3, and 7 days after surgery (p=0.003, 0.002, 0.002, 0.006, 0.004). Of the hemodynamic variables measured with the Swan-Ganz catheter, the mean pulmonary artery pressure increased significantly over the preoperative value at 0 hours, 6 hours, and 1, 2, and 3 days after surgery (p=0.002, 0.002, 0.006, 0.007, 0.008). Right ventricular pressure increased significantly at 0 hours, 6 hours, and 1 and 3 days after surgery (p=0.000, 0.009, 0.044, 0.032).
The pulmonary vascular resistance index [(mean pulmonary artery pressure - mean pulmonary capillary wedge pressure)/cardiac output index] increased significantly at 6 hours and 2 days after surgery (p=0.008, 0.028). Regression analysis of the changes in mean pulmonary artery pressure against NT-proBNP levels was significant at 6 hours after surgery (r=0.602, p=0.038) but not thereafter. Echocardiography displayed no significant changes after surgery. Conclusion: There was a significant correlation between changes in mean pulmonary artery pressure and the NT-proBNP level 6 hours after pulmonary resection. Changes in the NT-proBNP level after pulmonary resection can therefore serve as an index reflecting early hemodynamic changes in the right ventricle.
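The Wilcoxon rank sum test used for the pre- vs. post-operative comparisons reduces to the Mann-Whitney U statistic, whose pairwise-counting definition is easy to sketch (SPSS then derives a p-value from U, which is omitted here):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U for group `a` versus group `b`:
    the number of (a_i, b_j) pairs with a_i > b_j, counting
    exact ties as one half."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

When U is close to its maximum, len(a) * len(b), almost every value in one group exceeds every value in the other, which is the pattern behind the consistently significant postoperative NT-proBNP rises.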

A study on dermatologic diseases of workers exposed to cutting oil (절삭유 취급 근로자의 피부질환에 관한 연구)

  • Chun, Byung-Chul;Kim, Hee-Ok;Kim, Soon-Duck;Oh, Chil-Hwan;Yum, Yong-Tae
    • Journal of Preventive Medicine and Public Health, v.29 no.4 s.55, pp.785-799, 1996
  • We investigated the 1,004 workers who worked in a automobile factory to study the epidemiologic characterists of dermatoses due to cutting oils. Among the workers, 667(66.4%) answered the questionaire. They are belong to 5 departments of the factory-the Engine-Work(258 workers), Gasoline engine Assembly(210), Diesel engine Assembly(96), Power train Work(86), Power train Assembly(17). We measured the oil mist concentration in air of the departments and examined the workers who had dermatologic symptoms. The results were follows; 1) Oil mist concentration ; Of all measured points(52),9 points(17.2%) exeeded $5mg/m^3$- the time-weighed PEL-and one department had a upper confidence limit(95%) higher than $5mg/m^3$. 2) Dermatologists examined 213 workers. 172 of them complained any skin symptoms at that time - itching(32.5%), papule(21.6%), scale(15.7%), vesicle(12.5%) in order. The abnormal skin site found by dermatologist were palm(29.3%), finger & nail(24.6%), forearm(16.2%), back of hand(8.4%) in order. 3) As the result of physical examination, we found that 160 workers had skin diseases. Contact dermatitis was the most common; 69 workers had contact dermatitis alone(43.1%), 11 had contact dermatitis with acne(6.9%), 10 had contact dermatitis with folliculitis(6.3%), 1 had contact dermatitis with acne & folliculitis, and 1 had contact dermatitis with abnormal pigmentation. Others were folliculitis(9 workers, 5.6%), acne(8, 5.0%), folliculitis & acne (2, 1.2%), keratosis(1, 0.6%), abnormal pigmentation (1, 0.6%), and non-specific hand eczema (47, 29.3%). 4) The prevalence of any skin diseases was 34.0 pet 100 in cutting oil users, and 13.3 per 100 in non- users. Especially, the prevalence of contact dermatitis was 23.0 per 100 in cutting oil users and 23.0 per 100 in non-users. 
5) We tried patch test(standard serise, oil serise, organic solvents) on 49 patients to differentiate allergic contact dermatitis from irritant contact dermatitis and found 20 were positive. 6) In a multivariate analysis(independant=age, tenure, kinds of cutting oil), the risk of skin diseases was higher in the water-based cutting oil user and both oil user than non-user or neat oil user(odds ratio were 2.16 and 2.78, respectively). And the risk of contact dermatitis was much higher at the same groups(odds ratio were 5.16 and 6.82, respectively).


Purification Characteristics and Hydraulic Conditions in an Artificial Wetland System (인공습지시스템에서 수리학적 조건과 수질정화특성)

  • Park, Byeng-Hyen;Kim, Jae-Ok;Lee, Kwng-Sik;Joo, Gea-Jae;Lee, Sang-Joon;Nam, Gui-Sook
    • Korean Journal of Ecology and Environment, v.35 no.4 s.100, pp.285-294, 2002
  • The purpose of this study was to evaluate the relationships between purification characteristics and hydraulic conditions, and to clarify the basic and essential factors to be considered in the construction and management of an artificial wetland system for the improvement of reservoir water quality. The artificial wetland system was composed of a pumping station and six sequential plant beds with five species of macrophytes: Oenanthe javanica, Acorus calamus, Zizania latifolia, Typha angustifolia, and Phragmites australis. The system was operated as a free surface-flow system under the following conditions: inflow rate of $3,444-4,156\; m^3/d$, HRT of 0.5-2.0 hr, water depth of 0.1-0.2 m, hydraulic loading of 6.0-9.4 m/d, and relatively low nutrient concentrations in the inflow water (0.224-2.462 mgN/L, 0.145-0.164 mgP/L). The mean purification efficiencies for TN ranged from 12.1% to 14.3%, highest at the Phragmites australis bed, and those for TP were 6.3-9.5%, with similar efficiencies across all species. The mean purification efficiencies for SS and Chl-a ranged from 17.4% to 38.5% and from 12.0% to 20.2%, respectively, with the Oenanthe javanica bed showing the highest efficiency along with a higher influent concentration than the others. The mean purification amounts per day were $9.8-4.1\;g{\cdot}m^{-2}{\cdot}d^{-1}$ for BOD, $1.299-2.343\;g{\cdot}m^{-2}{\cdot}d^{-1}$ for TN, $0.085-1.821\;g{\cdot}m^{-2}{\cdot}d^{-1}$ for TP, $17.9-111.6\;g{\cdot}m^{-2}{\cdot}d^{-1}$ for SS, and $0.011-0.094\;g{\cdot}m^{-2}{\cdot}d^{-1}$ for Chl-a. The purification amount per day of TN was highest at the Zizania latifolia bed, and that of TP at the Acorus calamus bed. SS and Chl-a, as particulate materials, showed the highest purification amounts per day at the Oenanthe javanica bed, which was high on all parameters.
It was estimated that the purification amount per day increased with the concentration of influent and the shoot density of macrophytes, as was also seen in the purification efficiency. Correlation coefficients ($R^2$) between purification efficiencies and hydraulic conditions were 0.016-0.731 for HRT and 0.015-0.868 for daily inflow rate; those between purification amounts per day and hydraulic conditions were 0.173-0.763 for HRT and 0.209-0.770 for daily inflow rate. Among the correlations between purification efficiency and hydraulic conditions, the percentage of $R^2$ values above 0.5 was 20% for both HRT and daily inflow rate, whereas for purification amount per day it was 53% for HRT and 73% for daily inflow rate. The relationships between purification amount per day and hydraulic conditions were thus more significant than those of purification efficiency. In this study, high hydraulic conditions (HRT and inflow rate) did not appear to significantly affect the purification efficiency of nutrients. Therefore, for the improvement of a eutrophic reservoir with relatively low nutrient concentrations and a large quantity of water to be treated, the emphasis should be on the purification amount per day under high hydraulic loadings (HRT and inflow rate).
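The two performance measures in this study, removal efficiency (%) and removal amount per unit bed area in $g{\cdot}m^{-2}{\cdot}d^{-1}$, follow from influent/effluent concentrations by a simple mass balance. The formulas below are reconstructed from the units the abstract reports, not quoted from the paper:

```python
def purification(c_in, c_out, flow_m3_d, area_m2):
    """Removal efficiency (%) and removal amount per unit bed area
    (g per m2 per day) for one pollutant through one plant bed.

    c_in, c_out : influent/effluent concentration in mg/L (= g/m3)
    flow_m3_d   : inflow rate in m3/day
    area_m2     : bed surface area in m2
    """
    efficiency = 100.0 * (c_in - c_out) / c_in
    # mass removed per day, spread over the bed area
    amount = flow_m3_d * (c_in - c_out) / area_m2
    return efficiency, amount
```

This separation explains the paper's main point: with a high inflow rate, the removal amount per day can be large even when the percentage efficiency stays modest, so the amount is the better design target for a high-throughput, low-nutrient reservoir.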