
Development of Conformal Radiotherapy with Respiratory Gate Device (호흡주기에 따른 방사선입체조형치료법의 개발)

  • Chu Sung Sil;Cho Kwang Hwan;Lee Chang Geol;Suh Chang Ok
    • Radiation Oncology Journal / v.20 no.1 / pp.41-52 / 2002
  • Purpose : In 3D conformal radiotherapy, delivery of the optimum dose to the tumor while sparing normal tissue without a marginal miss is restricted by organ motion. For tumors in the thorax and abdomen, the planning target volume (PTV) must include a margin for movement of the tumor volume during treatment due to the patient's breathing. We designed a respiratory gating radiotherapy device (RGRD) for use during CT simulation, dose planning, and beam delivery under identical breathing-phase conditions. Using the RGRD, the treatment margin for breathing-induced motion of thoracic or abdominal organs can be reduced, improving the dose distribution of 3D conformal radiotherapy. Materials and Methods : Internal organ motion data for lung cancer patients were obtained by examining the diaphragm in the supine position to determine its position dependency. We built an RGRD composed of a strip band, drug sensor, micro switch, and an on-off switch connected to the LINAC control box. With the RGRD holding the breathing phase constant, spiral CT scanning, virtual simulation, and 3D dose planning for lung cancer patients were performed without the extended PTV margin required for free breathing, and the dose was then delivered at the same positions. We calculated effective volumes and normal tissue complication probabilities (NTCP) from dose-volume histograms for normal lung, and analyzed changes in dose associated with selected NTCP levels and the tumor control probabilities (TCP) at these new dose levels. The effects of 3D conformal radiotherapy with the RGRD were evaluated with DVHs (dose-volume histograms), TCP, NTCP, and dose statistics. Results : The average diaphragm movement was 1.5 cm in the supine position when patients breathed freely.
Depending on the tumor location, the PTV margin must be extended by 1 cm to 3 cm, which greatly increases normal tissue irradiation and hence the normal tissue complication probability. The simple and precise RGRD is easy to set up on patients, is sensitive to length variations of ±2 mm, and delivers on-off information to both the patient and the LINAC. We evaluated the treatment plans of patients who had received conformal partial-organ lung irradiation for thoracic malignancies. Using the RGRD, the PTV margin for free breathing can be reduced by about 2 cm for organs that move with respiration. TCP values remained almost the same (a 4-5% increase) for lung cancer regardless of extending the PTV margin to 2.0 cm, whereas NTCP values increased rapidly (by 50-70%) upon extending the PTV margin by 2.0 cm. Conclusion : Internal organ motion due to breathing can be reduced effectively using our simple RGRD. The method can be used clinically to reduce the motion-induced margin, thereby reducing normal tissue irradiation. Using treatment planning software, the dose to normal tissues was analyzed by comparing dose statistics with and without the RGRD. The potential benefit of radiotherapy derives from reducing or eliminating the PTV margins associated with patient breathing, as shown by the evaluation of lung cancer patients treated with 3D conformal radiotherapy.
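The NTCP analysis described above is typically done by reducing a dose-volume histogram to an effective volume and applying a complication model. The sketch below is a minimal, hypothetical illustration using the Kutcher-Burman DVH reduction and the Lyman (LKB) model; the function names and the parameter values (td50, m, n) are illustrative defaults, not values from the paper.

```python
import math

def effective_volume(dvh, n):
    """Kutcher-Burman DVH reduction: collapse a differential DVH,
    given as (dose_Gy, fractional_volume) bins, into an effective
    fractional volume receiving the maximum dose. n is the
    volume-effect parameter of the organ."""
    d_max = max(d for d, _ in dvh)
    return sum(v * (d / d_max) ** (1.0 / n) for d, v in dvh)

def lkb_ntcp(d_max, v_eff, td50=24.5, m=0.18, n=0.87):
    """Lyman-Kutcher-Burman NTCP for uniform dose d_max to the
    effective volume v_eff. td50/m/n are illustrative lung-type
    parameters (assumptions, not from the abstract)."""
    td50_v = td50 * v_eff ** (-n)            # volume-adjusted tolerance dose
    t = (d_max - td50_v) / (m * td50_v)
    # Normal CDF evaluated via the error function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

A larger PTV margin pushes more of the DVH toward high dose, raising the effective volume and therefore the NTCP, which mirrors the rapid NTCP increase reported when the margin is extended.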

The Effect of Glucocorticoid on the Change of Nitric Oxide and Cytokine Levels in Induced Sputum from Patients with Bronchial Asthma (기관지 천식 환자에서 부신피질 스테로이드 투여 전후 유도객담내 Nitric Oxide 및 Cytokine의 변화)

  • Kim, Tae-Yon;Yoon, Hyeong-Kyu;Choi, Young-Mee;Lee, Sook-Young;Kwon, Soon-Seog;Kim, Young-Kyoon;Kim, Kwan-Hyoung;Moon, Hwa-Sik;Park, Sung-Hak;Song, Jeong-Sup
    • Tuberculosis and Respiratory Diseases / v.48 no.6 / pp.922-931 / 2000
  • Background : Bronchial asthma is well known to be a chronic airway inflammatory disorder. Recently, sputum induced with hypertonic saline was introduced as a simple and useful noninvasive medium for investigating airway inflammation and symptom severity in patients with asthma. We examined eosinophils, eosinophil cationic protein (ECP), interleukin (IL)-3, IL-5, granulocyte-macrophage colony-stimulating factor (GM-CSF), and nitric oxide (NO) derivatives in induced sputum from patients with bronchial asthma, in order to determine the role of NO and these inflammatory cytokines as markers of airway inflammation or of changes in pulmonary function tests and symptoms. Methods : A total of 30 patients with bronchial asthma received oral prednisolone 30 mg daily for 2 weeks. Forced expiratory volume in one second ($FEV_1$), total blood eosinophil count, and induced-sputum eosinophil count, ECP, IL-3, IL-5, GM-CSF, and NO derivatives were determined before and after the administration of prednisolone. Results : Of the 30 patients, 13 (43.3%) were male and 17 (56.7%) were female. The mean age was 41.8 years (range 19-64 years). Two patients could not produce sputum at the second study and 3 could not be followed up after their first visit. Two weeks after prednisolone administration, there was a significant increase in $FEV_1$ (% of predicted value) from 78.1$\pm$20.6% to 90.3$\pm$18.3% (P<0.001). The eosinophil percentage in induced sputum decreased significantly after prednisolone treatment, from 56.1$\pm$27.2% to 29.6$\pm$21.3% (P<0.001), as did ECP, from $134.5\pm68.1\;{\mu}g/L$ to $41.5\pm42.4\;{\mu}g/L$ (P<0.001). After prednisolone treatment, the eotaxin concentration also showed a decreasing tendency, from 26.7$\pm$12.8 pg/ml to 21.7$\pm$8.7 pg/ml.
There was a decreasing tendency but no significant difference in total blood eosinophil count (425.7$\pm$265.9 vs 287.7$\pm$294.7), and no significant difference in the concentration of NO derivatives ($70.4\pm44.6\;{\mu}mol/L$ vs $91.5\pm48.3\;{\mu}mol/L$), after prednisolone treatment. IL-3, IL-5, and GM-CSF were undetectable in the sputum of most subjects both before and after treatment. Before treatment, a significant inverse correlation was observed between $FEV_1$ and sputum ECP (r=-0.364, P<0.05), and there was a significant correlation between sputum eosinophils and eotaxin (r=0.369, P<0.05). Conclusion : The eotaxin and ECP concentrations in induced sputum may serve as markers of airway inflammation after treatment in bronchial asthma. In addition, measurement of the sputum eosinophil percentage is believed to be a simple method of showing the degree of airway inflammation and airway obstruction before and after prednisolone treatment in bronchial asthma. However, unlike exhaled NO, examination of NO derivatives in induced sputum with the Griess reaction is considered an ineffective marker of changing airway inflammation and obstructive symptoms.
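The reported r values (e.g. r = -0.364 between $FEV_1$ and sputum ECP) are Pearson product-moment correlation coefficients. A minimal sketch of that computation follows; the paired values used here are fabricated illustrative numbers, chosen only so that higher ECP accompanies lower $FEV_1$, as the study reports.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (fabricated) pairs: FEV1 % predicted vs sputum ECP (ug/L)
fev1 = [60, 70, 75, 85, 95]
ecp = [210, 160, 150, 90, 40]
r = pearson_r(fev1, ecp)   # negative, matching the sign of the reported r = -0.364
```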


A Morphological Study of Bamboos by Vascular Bundle Sheath (대나무류(類)의 유관속초(維管束鞘)에 의(依)한 형태학적(形態學的) 연구(硏究))

  • Kim, Jai Saing
    • Journal of Korean Society of Forest Science / v.25 no.1 / pp.13-47 / 1975
  • Among the many species of bamboo, it is well known that the dwarf type is widely distributed in the tropical regions and the slender type in the temperate zone, where the plants have differentiated extensively into about one hundred species in 50 genera. In many oriental countries, bamboo wood is used as a material for construction and for the manufacture of technical instruments. The bamboo shoot is also regarded as a delicious edible resource. Moreover, recent medical investigation indicates that the sap of certain bamboo species has an antibiotic effect against cancer. Fortunately, it is very easy to propagate bamboo by using cuttings from southeast Asian countries. This important resource can further be used as a significant source of pulp, which is becoming increasingly important. Yet the classification system for this significant resource has not been completely established to date, even though its importance has been emphasized. The classification method initiated by Carl von Linné in the 18th century, based on the morphological characteristics of flowers, was the first step in developing a classification. But it was not an easy task, because this type of classification is based on the sexual organs of the bamboo. Since bamboo has a long life cycle of 60-120 years, classification by this method was very difficult: the materials for classification are not abundant and some species have changed, even though many references on the morphological classification of bamboo are available nowadays. Identification of bamboos under the flower-based morphological classification system is therefore not practical. Consequently, a classification system for bamboo based on endomorphological characteristics was initiated by Liese.
A classification method based on the morphological characteristics of the vascular bundle was then developed by Grosser. These classification methods are fundamentally related to Holttum's method, which stressed the morphology of the ovary. The author investigated re-establishing a new classification method based on the vascular bundle sheath. Twenty-six species in 11 genera originating from Formosa were used in the study. The results obtained largely agreed with those of Grosser. Many difficulties were found in distinguishing the species of Bambusa and Dendrocalamus. These two genera were critically differentiated under the new classification system, which is based on the existence of a separated vascular bundle sheath in the bamboo. According to these results, it is recommended that Bambusa be divided into two groups, placed either as subspecies or in lower categories. This recommendation is supported by the observation that the evolutionary pattern of the bamboo trunk runs from outward to inward, and by the viewpoint that the fundamental direction of evolution is from simple to complex. Many problems remain to be solved through more critical examination, by comparing these results with those of classification based on the sexual organs. The author observed cross-sections of the vascular trunk of bamboo and compared the results with the A, $B_1$, $B_2$, C, and D groups of Grosser and Liese's classification. Groups A and $B_2$ were in accordance with the results of those scholars, while group D showed many differences. Grosser and Liese divided bamboo vascular bundles into "g" and "h" types, and they included Dendrocalamus and Bambusa in group D without considering the type of vascular bundle sheath. However, the results obtained by the author showed that Dendrocalamus and Bambusa are differentiated from each other.
Another type, "i", was identified according to the existence of a separated vascular bundle sheath: Bambusa was shown to have a separated vascular bundle sheath, while Dendrocalamus does not. Moreover, Bambusa showed peculiar characteristics in its pattern of vascular development, i.e., one form with an inward vascular bundle sheath and the other with a bivascular bundle sheath (inward and outward). In conclusion, the bamboo species used in this experiment were classified into group D, without any separated vascular bundle sheath, and group E, with a vascular bundle sheath. Group E was divided into two subgroups: group $E_1$, with a bivascular sheath, and group $E_2$, with only an inward vascular sheath. Therefore, the Bambusa placed in group D by Grosser and Liese was moved to group E. Dendrocalamus appeared to be intermediate between groups $E_1$ and $E_2$ under this classification system, which is summarized as follows. Phyllostachys type: Group A - Phyllostachys, Chimonobambusa, Arundinaria, Pseudosasa, Pleioblastus, Yushania; Pome type: Group $B_2$ - Schizostachyum, Melocanna; Hemp type: Group D - Dendrocalamus; Bambu type: Group $E_1$ - Bambusa.
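The author's final grouping reads as a short dichotomous key on the presence and orientation of the separated vascular bundle sheath. The sketch below is a hypothetical encoding of that key; the group labels mirror the abstract, while the genus comments are illustrative only.

```python
def classify_bamboo(has_separated_sheath, inward_sheath=False, outward_sheath=False):
    """Toy dichotomous key following the groups proposed in the study:
    no separated vascular bundle sheath -> group D;
    both inward and outward (bivascular) sheath -> group E1;
    inward sheath only -> group E2."""
    if not has_separated_sheath:
        return "D"       # e.g. Dendrocalamus-like forms (illustrative)
    if inward_sheath and outward_sheath:
        return "E1"      # bivascular sheath, e.g. some Bambusa (illustrative)
    if inward_sheath:
        return "E2"      # inward vascular sheath only
    return "unclassified"
```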


Estimation of SCS Runoff Curve Number and Hydrograph by Using Highly Detailed Soil Map(1:5,000) in a Small Watershed, Sosu-myeon, Goesan-gun (SCS-CN 산정을 위한 수치세부정밀토양도 활용과 괴산군 소수면 소유역의 물 유출량 평가)

  • Hong, Suk-Young;Jung, Kang-Ho;Choi, Chol-Uong;Jang, Min-Won;Kim, Yi-Hyun;Sonn, Yeon-Kyu;Ha, Sang-Keun
    • Korean Journal of Soil Science and Fertilizer / v.43 no.3 / pp.363-373 / 2010
  • The curve number (CN) indicates the runoff potential of an area. The US Soil Conservation Service (SCS) CN method is a simple, widely used, and efficient method for estimating the runoff from a rainfall event in a particular area, especially in ungauged basins. Soil maps requested by end-users were used predominantly, up to about 80% of total use, for estimating CN-based rainfall-runoff. This study introduces the use of soil maps for hydrologic and watershed management, focused on the hydrologic soil group, and presents a case study assessing effective rainfall and the runoff hydrograph with the SCS-CN method in a small watershed. The distribution of hydrologic soil groups based on the detailed soil map (1:25,000) of Korea was 42.2% (A), 29.4% (B), 18.5% (C), and 9.9% (D) for HSG 1995, and 35.1% (A), 15.7% (B), 5.5% (C), and 43.7% (D) for HSG 2006, respectively. The D group in HSG 2006 accounted for 43.7% of the total, of which 34.1% was reclassified from the A, B, and C groups of HSG 1995. The similarity between HSG 1995 and HSG 2006 was about 55%. The study area was located in Sosu-myeon, Goesan-gun, Chungcheongbuk-do, comprising a catchment of approximately 44 $km^2$. We used a digital elevation model (DEM) to delineate the catchments. The soils were classified into 4 hydrologic soil groups on the basis of measured infiltration rates and a model of the representative soils of the study area reported by Jung et al. (2006). Digital soil maps (1:5,000) were used to classify hydrologic soil groups at the soil-series level. Using high-resolution satellite images, we delineated the boundary of each field or other parcel on screen, then surveyed the land use and cover of each. We calculated the CN for each parcel and used those data, together with a land use and cover map and a hydrologic soil map, to estimate runoff. The CN values of the catchment, which can range from 0 (no runoff) to 100 (all precipitation runs off), were 73 by HSG 1995 and 79 by HSG 2006, respectively.
The runoff responses, peak runoff and time to peak, were examined using the SCS triangular synthetic unit hydrograph, and the results with HSG 2006 showed better agreement with the field-observed data than those with HSG 1995.
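The SCS-CN method referenced above converts the curve number into a potential maximum retention S and then computes direct runoff from storm rainfall. A minimal sketch in the standard metric form follows, using the conventional initial abstraction Ia = 0.2 S (the example storm depth is illustrative, not from the paper).

```python
def scs_runoff(p_mm, cn):
    """SCS curve-number direct runoff (mm) for a storm of depth p_mm.
    S is the potential maximum retention (metric form);
    initial abstraction Ia = 0.2 * S by convention."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0                     # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# The catchment CNs reported in the study were 73 (HSG 1995) and 79 (HSG 2006);
# for the same storm, the higher CN yields the larger runoff depth.
q_1995 = scs_runoff(100.0, 73)
q_2006 = scs_runoff(100.0, 79)
```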

The influence of the four noted physicians of Geum-Won era on the completion of the medicine in the Chosun dynasty (금원사대가의학(金元四大家醫學)이 조선조의학(朝鮮朝醫學) 형성(形成)에 미친 영향(影響))

  • Cheong, Myeon;Hong, Won Sik
    • Journal of Korean Medical Classics / v.9 / pp.432-552 / 1996
  • The influence of the four noted physicians of the Geum-Won era (金元代) on the formation of medicine in the Chosun dynasty (朝鮮朝) can be summarized as follows. 1. The four noted physicians of the Geum-Won era were Yoo-Wan-So (劉完素), Jang-Jong-Jung (張從正), Lee-Go (李杲), and Ju-Jin-Heung (朱震亨). 2. Yoo-Wan-So built his theory on the basis of the Nae-Kyung ("內經") and the Sang-Han-Lon ("傷寒論"); his medical ideas were characterized in his books by, for example, the application of O-Oon-Yuk-Ki (五運六氣), the Ju-Wha theory (主火論), and the hang-hae-seng-je theory (亢害承制論). From his theory and method of study, many branches of oriental medicine arose. He made an effort to study the Nae-Kyung, which had been neglected for many years; contrary to the old approach, in which the Nae-Kyung was merely explained or revised, he applied its theory to clinical care. His theories of Yuk-Gi-Byung-Gi (六氣病機) and On-Yeul-Byung (溫熱病) greatly influenced his students and posterity, not to mention Jang-Jong-Jung and Ju-Jin-Heung among the four noted physicians; he therefore became the father of the Yuk-Gi (六氣) and On-Yeul (溫熱) schools. 3. Jang-Jong-Jung (張從正) took Yoo-Wan-So as a model and followed his Yuk-Gi-Chi-Byung (六氣致病) theory, but he insisted on the use of diaphoretics, emetics, and purgatives to remove the causes of disease, especially purgatives, so he was called Gong-Ha-Pa (攻下派). He held that to strengthen ourselves we should use food, and that to remove the cause of disease we should use purgatives, emetics, and diaphoretics. Jang-Jong-Jung's Gang-Sim-Wha (降心火) theory, an improvement on Yoo-Wan-So's Han-Ryang (寒凉) theory, influenced the origin of Ju-Jin-Heung's Ja-Eum-Gang-Wha (滋陰降火) theory. 4.
Lee-Go (李杲) held that the Bi-Wi (脾胃) plays a leading role in physiological function and pathological change, that internal disease originates from the deficiency of Gi (氣) arising from disorders of the digestive organs, and that the causes of internal disease are irregular meals, overwork, and mental shock. Lee-Go studied the struggle of Jung-Sa (正邪), and in the theory of prescription he asserted the method of Seung-Yang-Bo-Gi (升陽補氣), though he also used the method of Go-Han-Gang-Wha (苦寒降火). 5. The authors of the Eui-Hak-Jung-Jun ("醫學正傳"), the Eui-Hak-Ib-Moon ("醫學入門"), and the Man-Byung-Whoi-Choon ("萬病回春") analyzed the medical theory of the four noted physicians and added their own experiences. They helped organize the complicated theories of the four noted physicians imported into our country, and greatly affected the formation of medical science in the Chosun dynasty. The Eui-Hak-Jung-Jun was written by Woo-Dan (虞槫); in this book he quoted the theories of Yoo-Wan-So, Jang-Jong-Jung, Lee-Go, and Ju-Jin-Heung, and especially respected Ju-Jin-Heung; it affected the writing of the Eui-Lim-Choal-Yo ("醫林撮要"). The Eui-Hak-Ib-Moon, written by Lee-Chun (李梴), followed the medical science of Lee-Go and Ju-Jin-Heung among the four noted physicians of the Geum-Won era. Its Taoist character, its idea of caring for health, and its organization affected the Dong-Eui-Bo-Kham ("東醫寶鑑"). Gong-Jung-Hyun (龔廷賢) wrote the Man-Byung-Whoi-Choon using the best parts of the theories of Yoo-Wan-So, Jang-Jong-Jung, Lee-Go, and Ju-Jin-Heung; this book partly affected the Dong-Eui-Soo-Se-Bo-Won ("東醫壽世保元"). 6. Our medical science developed from the experience of treating disease gained in human life; this medical knowledge was arranged and organized in the Hyang-Yak-Jib-Sung-Bang ("鄕藥集成方"), while the medical books imported from China were organized in the Eui-Bang-Yoo-Chwi ("醫方類聚"), which formed the base of medical development in the Chosun dynasty. 7.
The Eui-Lim-Choal-Yo ("醫林撮要") was written by Jung-Kyung-Sun (鄭敬先) and revised by Yang-Yei-Soo (楊禮壽). It was written on the basis of Woo-Dan's Eui-Hak-Jung-Jun, which compiled the medical science of the four noted physicians of the Geum-Won era. It organized the confusing theories of the four noted physicians together with the medical books of the Myung era, and thus completed the basic form of Byun-Jeung-Non-Chi (辨證論治) and influenced the writing of the Dong-Eui-Bo-Kham ("東醫寶鑑"). 8. The Dong-Eui-Bo-Kham was written on the basis of the theory of Eum-Yang-O-Haeng (陰陽五行) and the theory of the correspondence of heaven and man (天人相應說) in the Nae-Kyung. It contained several theories and bodies of knowledge, such as the Taoist theory of essence (精), vital force (氣), and spirit (神), the medical science of the Geum-Won era, and our original medical knowledge and experience. It established the basic organization of our medical science and completed the Byun-Jeung-Non-Chi. The Dong-Eui-Bo-Kham developed medical science from simple medical treatment toward preventive medicine through the care of health, and also discussed human cultivation and Huh-Joon's (許浚) own view of human life. It adopted most of Lee-Go's and Ju-Jin-Heung's theories and the new theory that "the kidney is the basis of the prior heaven; the spleen is the basis of the posterior heaven," thereby emphasizing the role of the spleen and kidney (脾腎) in Jang-Boo-Byung-Gi (臟腑病機). It contained Ju-Jin-Heung's theory of diagnosing and treating disease by a person's color or build (black or white, fat or thin), his theory that "phlegm produces fever, and fever produces palsy" (痰生熱 熱生風), and his theory of Sang-Wha (相火論). The Dong-Eui-Bo-Kham also contained Lee-Go's theory of Wha-Yu-Won-Gi-Bool-Yang-Lib (火與元氣不兩立論) and quoted his theory of Bi-Wi (脾胃論) and his theory of Nae-Oi-Sang-Byun (內外傷辨). For the use of medicine, it followed Lee-Go.
It used Yoo-Wan-So's theory of Oh-Ji-Kwa-Geuk-Gae-Wi-Yul-Byung (五志過極皆爲熱病) for the treatment of hurt spirit (傷神), because fever was considered the cause of the disease, and Jang-Jong-Jung's theory of Saeng-Geuk-Je-Seung (生克制勝) for the treatment of mental disease. 9. Lee-Je-Ma's Dong-Eui-Soo-Se-Bo-Won ("東醫壽世保元") adopted the medical theories of the Song-Won-Myung era and analyzed them using the physical constitutional theory of Sa-Sang-In (四象人). It added Dong-Mu's main ideas to complete the theory and clinical practice of Sa-Sang-Eui-Hak (四象醫學). Lee-Je-Ma did not quote the four noted physicians of the Geum-Won era in setting out the constitutional theory of disease and medicine for the Tae-Eum-In (太陰人), So-Yang-In (少陽人), So-Eum-In (少陰人), and Tae-Yang-In (太陽人).


Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there has been much research on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare applications such as support for the elderly, measurement of calorie consumption, and analysis of lifestyles and exercise patterns. One challenge faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to build a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities by using only a single sensor, the smartphone accelerometer. The approach we take to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that makes multi-class predictions.
Depending on how the set of classes is split in two at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than the others; however, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature-subset selection, a random forest enjoys more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that handles a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used to classify these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers.
Of the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with some similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
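The tree-building and voting steps described above can be sketched compactly. The following is a hypothetical, self-contained illustration of an ensemble of nested dichotomies; for simplicity it uses a 1-nearest-neighbour stub as the binary base learner at each node instead of the random forests used in the paper, and the toy activity data are illustrative.

```python
import random

class OneNN:
    """Stand-in binary base learner (the paper uses random forests):
    1-nearest-neighbour over the binarised training set."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self

    def predict(self, x):
        dists = [sum((a - b) ** 2 for a, b in zip(x, xi)) for xi in self.X]
        return self.y[dists.index(min(dists))]

def build_dichotomy(X, y, classes, rng):
    """Recursively split the class set in two at random, training a
    binary classifier at each internal node: one nested dichotomy."""
    if len(classes) == 1:
        return classes[0]                        # leaf holds a single class
    shuffled = classes[:]
    rng.shuffle(shuffled)
    left = set(shuffled[: len(shuffled) // 2])
    right = set(shuffled[len(shuffled) // 2:])
    clf = OneNN().fit(X, [0 if lbl in left else 1 for lbl in y])
    def subset(side):
        pairs = [(x, lbl) for x, lbl in zip(X, y) if lbl in side]
        return [p[0] for p in pairs], [p[1] for p in pairs]
    lX, ly = subset(left)
    rX, ry = subset(right)
    return (clf,
            build_dichotomy(lX, ly, sorted(left), rng),
            build_dichotomy(rX, ry, sorted(right), rng))

def predict_tree(node, x):
    while isinstance(node, tuple):               # descend to a leaf
        clf, l, r = node
        node = l if clf.predict(x) == 0 else r
    return node

def end_predict(trees, x):
    """END: majority vote over an ensemble of random nested dichotomies."""
    votes = [predict_tree(t, x) for t in trees]
    return max(set(votes), key=votes.count)
```

With well-separated toy clusters (e.g. one feature, three activity labels), building several random trees and voting reproduces the multi-class behaviour the abstract describes.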

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.109-122 / 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and social problem solving by analyzing open data or data collected directly. In Korea, various companies and individuals are attempting big data analysis, but they struggle from the initial stage of analysis due to limited big data disclosure and collection difficulties. Nowadays, system improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, chiefly services for opening public data such as the Korean Government 3.0 portal (data.go.kr). In addition to the efforts made by the government, services that share data held by corporations or individuals are running, but it is difficult to find useful data because of the small amount of shared data. Moreover, big traffic problems can occur, because the entire dataset must be downloaded and examined in order to grasp its attributes and even simple information about the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper for that purpose: it means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data can be improved by providing information on the properties and characteristics of a dataset when a data user searches for it. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that can occur when the original data are disclosed can be avoided, enabling big data sharing between the data provider and the data user.
Second, it is necessary to quickly generate appropriate preprocessing results according to the disclosure level or network status of the raw data and to provide the results to users through distributed big data processing with Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing the data requested by a user, it preprocesses them to a size suited to the current network and transmits the result, so that no big traffic occurs. In this paper, we present various data sizes according to disclosure level through pre-analysis. This method is expected to generate low traffic compared with the conventional approach of sharing only raw data across many systems. We describe how to solve the problems that occur when big data are released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, with a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent is required by the data provider; it performs pre-analysis of big data to generate a Data Descriptor containing information on the sample data, summary data, and raw data. In addition, it performs fast and efficient big data preprocessing through distributed processing and continuously monitors network traffic. The Client Agent is placed on the data-user side. It can search big data through the Data Descriptor, the result of the pre-analysis, and can quickly locate the data. The desired data can then be requested from the server and downloaded according to the disclosure level. The Server Agent and Client Agent are separated so that data published by a provider can be used by the user.
In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem; we construct the detailed modules of the client-server model and present the design of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses new data. By publishing the newly processed data through the Server Agent, the data user takes on the role of data provider. The data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis on the sample data. In this way, raw data is processed and the processed big data is utilized by users, naturally forming a shared environment. The roles of data provider and data user are not fixed, yielding an ideal shared service in which everyone can be both provider and user. The client-server model thus solves the big data sharing problem, provides a free sharing environment for secure big data disclosure, and offers an ideal shared service in which big data can easily be found.
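The Data Descriptor produced by the Server Agent's pre-analysis can be pictured as a small bundle of summary statistics plus a sample, shareable in place of the raw data. The sketch below is a hypothetical stdlib-only illustration (the paper's system uses Spark for distributed preprocessing); the field names and the record data are assumptions for demonstration.

```python
import random
import statistics

def make_data_descriptor(rows, sample_rows=3, seed=0):
    """Build a hypothetical Data Descriptor from a list of record dicts:
    per-numeric-column summary statistics plus a small random sample,
    so a data user can inspect the dataset without downloading it."""
    cols = sorted({k for r in rows for k in r})
    summary = {}
    for c in cols:
        vals = [r[c] for r in rows if isinstance(r.get(c), (int, float))]
        if vals:                                  # summarize numeric columns only
            summary[c] = {"count": len(vals), "min": min(vals),
                          "max": max(vals), "mean": statistics.mean(vals)}
    rng = random.Random(seed)
    sample = rng.sample(rows, min(sample_rows, len(rows)))
    return {"n_rows": len(rows), "columns": cols,
            "summary": summary, "sample": sample}
```

A data user browsing descriptors like this can judge a dataset's properties (size, columns, value ranges) and request the raw data only when it looks useful, which is the traffic-saving behaviour the model aims for.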

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.109-131
    • /
    • 2014
  • As the demand for nuclear power plant equipment grows continuously worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, preadjudication (prescreening) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, and it takes a long time to develop one. Because human experts must manually evaluate all documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the reliance on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs such a system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). Keyword extraction is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method was implemented with TF-IDF, the de facto standard method for representative keyword extraction in text mining.
TF (Term Frequency) counts how often a term occurs within a document, indicating how important the term is to that document, while IDF (Inverse Document Frequency) is based on how rarely the term occurs across the document set, indicating how uniquely the term represents a document. The results show that the semi-automatic approach, based on collaboration between machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in order to derive the final score (γ) for deciding whether the presented case concerns strategic material. The final score (γ) represents the document similarity between past cases and the new case; it is induced not only by conventional TF-IDF but also by a nuclear-system similarity score that takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents in the case base considered most similar to the new case and provides them together with a degree of credibility. With the final score and the credibility score, the user can more easily see which documents in the case base are worth looking up and make a proper decision at relatively low cost. The system was evaluated by developing a prototype and testing it with field data; the workflows and outcomes were verified by field experts.
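The TF-IDF weighting described above can be computed directly. The sketch below uses the standard `tf × log(N/df)` form; the tokenized example documents and the raw-count TF normalization are illustrative choices, not the paper's exact configuration.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights per document: TF measures a term's
    frequency within a document; IDF down-weights terms that occur
    in many documents of the collection."""
    n = len(docs)
    df = Counter()                 # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return weights

docs = [["reactor", "pump", "valve"],
        ["pump", "pump", "seal"],
        ["valve", "seal", "reactor"]]
w = tf_idf(docs)
# "pump" fills 2 of 3 tokens in doc 1 and appears in 2 of 3 documents
print(round(w[1]["pump"], 3))  # 0.27
```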
This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can serve as a meaningful example of a knowledge service application.
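The abstract does not state how α and β are combined into γ; a weighted linear combination is one plausible reading, sketched here with an assumed weight `lam`, together with the top-3 retrieval step the system performs.

```python
def final_score(alpha, beta, lam=0.5):
    """Combine document-to-document similarity (alpha) and
    document-to-nuclear-system similarity (beta) into a single
    score gamma; the linear form and weight lam are assumptions."""
    return lam * alpha + (1 - lam) * beta

def top_k_cases(case_base, similarities, k=3):
    """Return the k past cases most similar to the new case,
    mirroring the system's top-3 retrieval."""
    scored = [(case, final_score(a, b))
              for case, (a, b) in zip(case_base, similarities)]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

cases = ["case-A", "case-B", "case-C", "case-D"]
sims = [(0.9, 0.2), (0.5, 0.8), (0.7, 0.7), (0.1, 0.1)]
print(top_k_cases(cases, sims))
```

Note how case-C, strong on both α and β, outranks case-A, which is strong only on plain document similarity; this is exactly the effect of folding domain context into γ.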

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.95-118
    • /
    • 2017
  • Recently, centered on the downtown area, transactions of row and multiplex housing have become active, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing remains a blind spot for real estate information, creating a social problem due to changes in market size and to information asymmetry arising from shifts in demand. The 5 or 25 districts used by the Seoul Metropolitan Government or the Korea Appraisal Board (hereafter, KAB) were established along administrative boundaries and have been used in existing real estate studies; however, because they are zoned for urban planning, they are not a district classification suited to real estate research. Building on existing studies, this study found that the spatial structure of Seoul needs to be redefined for estimating future housing prices. We therefore attempted to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing. In other words, the simple division by existing administrative districts has been inefficient, so this study aims to cluster Seoul into new areas for more efficient real estate analysis. The hedonic model was applied to real transaction price data of row and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. The data comprised real transaction prices of Seoul row and multiplex housing from January 2014 to December 2016, together with the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Data preprocessing followed these steps: removal of underground transactions, price standardization per area, and removal of transaction cases with standardized values above 5 or below -5.
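The three preprocessing steps listed, dropping underground transactions, standardizing price per area, and removing standardized cases outside [-5, 5], might look like the following sketch. The record field names (`floor`, `price`, `area`) are assumptions about the MOLIT data layout.

```python
import statistics

def preprocess(transactions):
    """Drop underground units, z-standardize price per area, and
    remove cases whose standardized value falls outside [-5, 5]."""
    above_ground = [t for t in transactions if t["floor"] >= 1]
    prices = [t["price"] / t["area"] for t in above_ground]
    mu, sigma = statistics.mean(prices), statistics.pstdev(prices)
    kept = []
    for t, p in zip(above_ground, prices):
        z = (p - mu) / sigma
        if -5 <= z <= 5:
            kept.append({**t, "std_price": z})
    return kept

deals = [
    {"floor": -1, "price": 100.0, "area": 10.0},  # underground: dropped
    {"floor": 1, "price": 200.0, "area": 20.0},
    {"floor": 2, "price": 300.0, "area": 20.0},
]
cleaned = preprocess(deals)
print(len(cleaned))  # 2
```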
Through this preprocessing, the 132,707 raw cases were reduced to 126,759. The R program was used as the analysis tool. After preprocessing, the data model was constructed: first, K-means clustering was performed; in addition, a regression analysis using the hedonic model and a cosine similarity analysis were conducted. Based on the constructed data model, we clustered on the basis of the longitude and latitude of Seoul and conducted a comparative analysis against the existing areas. The results indicate that the goodness of fit of the model was above 75% and that the variables used in the hedonic model were significant. In other words, the 5 or 25 existing administrative districts were reorganized into 16 districts. This study thus derived a method of clustering row and multiplex housing in Seoul using the K-Means clustering algorithm and the hedonic model, reflecting property price characteristics. We also present academic and practical implications, along with the limitations of this study and directions for future research. The academic implications are, first, that clustering by property price characteristics improves on the areas used by the Seoul Metropolitan Government, KAB, and existing real estate research, and second, that whereas apartments have been the main subject of existing research, this study proposes a method of classifying areas in Seoul using public information (i.e., real transaction data from MOLIT) under Government 3.0. The practical implications are that the results can serve as basic data for real estate research on row and multiplex housing, that research on this housing type is expected to be activated, and that the accuracy of models of actual transactions is expected to increase.
Future research should conduct various analyses to overcome the limitations identified here and pursue deeper investigation.
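Clustering on longitude and latitude, as the study does, is the textbook Lloyd's algorithm; the paper used R, but the procedure itself is language-independent. Below is a minimal pure-Python version on toy coordinates (the points and `k=2` are invented for illustration; the study derived 16 districts from real transaction data).

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as its cluster mean, for a fixed number
    of iterations."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: (p[0] - centroids[i][0]) ** 2
                                  + (p[1] - centroids[i][1]) ** 2)
            clusters[idx].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep old centroid if a cluster empties out
                centroids[i] = (sum(p[0] for p in c) / len(c),
                                sum(p[1] for p in c) / len(c))
    return centroids, clusters

# Toy longitude/latitude points forming two well-separated groups
pts = [(126.90, 37.50), (126.91, 37.51), (127.10, 37.60), (127.11, 37.61)]
centroids, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))  # [2, 2]
```

In the study, each resulting cluster plays the role of a new district, replacing the administrative 5/25-district division.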

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.79-104
    • /
    • 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform excellently when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields in A.I. research owing to its wide applicability, and much research has been conducted to improve its performance in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in it more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts rather than that of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer, and the way of interpreting and expressing the image also differs with the level of expertise. The general public tends to recognize an image from a holistic, general perspective, that is, by identifying its constituent objects and their relationships.
In contrast, domain experts tend to recognize an image by focusing on the specific elements needed to interpret it in light of their expertise. This implies that the meaningful parts of an image differ with the viewer's perspective, even for the same image, and image captioning should reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the domain expertise is transplanted through transfer learning with a small amount of expertise data. However, a naive application of transfer learning with expertise data may introduce another problem: simultaneous learning on captions of various characteristics can cause so-called 'inter-observation interference', which makes it difficult to learn each characteristic point of view purely. When learning from vast amounts of data, most of this interference is self-purified and has little impact on the results; in fine-tuning on a small amount of data, however, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each character. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 'image / expertise caption' pairs were created and used for the expertise transplantation experiments.
The experiments confirmed that captions generated by the proposed methodology reflect the implanted expertise, whereas captions generated by learning on general data alone contain much content irrelevant to expert interpretation. In this paper we thus propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. By applying the proposed methodology to expertise transplantation in various fields, we expect much future research to address the shortage of expertise data and to improve image captioning performance.