• Title/Summary/Keyword: Application Selection


The Effect of Partially Used High Energy Photon on Intensity-modulated Radiation Therapy Plan for Head and Neck Cancer (두경부암 세기변조방사선치료 계획 시 부분적 고에너지 광자선 사용에 따른 치료계획 평가)

  • Chang, Nam Joon;Seok, Jin Yong;Won, Hui Su;Hong, Joo Wan;Choi, Ji Hun;Park, Jin Hong
    • The Journal of Korean Society for Radiation Therapy / v.25 no.1 / pp.1-8 / 2013
  • Purpose: The selection of a proper energy in treatment planning is important because dose distribution in the body varies with photon energy. In general, low energy photons have been used in intensity-modulated radiation therapy (IMRT) for head and neck (H&N) cancer. The aim of this study was to evaluate the effect of partially using high energy photons at posterior oblique fields on IMRT plans for H&N cancer. Materials and Methods: The study was carried out on 10 patients (5 nasopharyngeal cancer, 5 tonsillar cancer) treated with IMRT at Seoul National University Bundang Hospital. CT images were acquired at 3 mm slice thickness under the same conditions, and treatment planning was performed with Eclipse (Ver. 7.1, Varian, Palo Alto, USA). Two plans were generated under the same planning objectives, dose volume constraints, and eight-field setting: (1) the low energy plan (LEP), created using the 6 MV beam alone, and (2) the partially used high energy plan (PHEP), created by partially using the 15 MV beam, with its deeper penetration depth, at two posterior oblique fields, while the 6 MV beam was used at the remaining fields. The LEP and PHEP were compared in terms of coverage, conformity index (CI), and homogeneity index (HI) for the planning target volume (PTV). For organs at risk (OARs), $D_{mean}$ and $D_{50%}$ were analyzed for both parotid glands, and $D_{max}$ and $D_{1%}$ were analyzed for the spinal cord. Integral dose (ID) and total monitor units (MU) were compared as additional parameters. To compare dose to the normal tissue of the posterior neck, a posterior-normal tissue volume (P-NTV) was defined for each patient, and its $D_{mean}$, $V_{20Gy}$, and $V_{25Gy}$ were evaluated using dose volume histograms (DVH). Results: The dose distributions were similar between the LEP and PHEP with regard to coverage, CI, and HI for the PTV. No evident difference was observed in the spinal cord.
However, the $D_{mean}$ and $D_{50%}$ for both parotid glands were slightly reduced, by 0.6% and 0.7%, in the PHEP. The ID was reduced by 1.1% in the PHEP, and total MU for the PHEP was 1.8% lower than that for the LEP. In the P-NTV, the $D_{mean}$, $V_{20Gy}$, and $V_{25Gy}$ of the PHEP were 1.6%, 1.8%, and 2.9% lower than those of the LEP. Conclusion: Doses to some OARs and to normal tissue, as well as total monitor units, were reduced in the IMRT plan with partially used high energy photons. Although it is unclear whether these reductions have a clinical benefit for the patient, application of partially used high energy photons could improve the overall plan quality of IMRT for head and neck cancer.
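The PTV metrics compared above can be computed directly from a plan's dose statistics. The abstract does not state which CI and HI definitions were used; the sketch below assumes two common choices, the RTOG-style conformity index (prescription isodose volume over target volume) and the ICRU Report 83 homogeneity index, $(D_{2\%}-D_{98\%})/D_{50\%}$:

```python
def conformity_index(prescription_isodose_volume_cc, target_volume_cc):
    """RTOG-style CI (assumed definition): volume enclosed by the
    prescription isodose divided by the PTV volume; 1.0 is ideal."""
    return prescription_isodose_volume_cc / target_volume_cc

def homogeneity_index(d2, d98, d50):
    """ICRU 83 HI (assumed definition): (D2% - D98%) / D50%;
    0.0 means a perfectly homogeneous dose across the PTV."""
    return (d2 - d98) / d50
```

For example, a plan whose prescription isodose encloses 105 cc around a 100 cc PTV has CI = 1.05; a plan with D2% = 74 Gy, D98% = 67 Gy, and D50% = 70 Gy has HI = 0.1.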


A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia Pacific Journal of Information Systems / v.21 no.2 / pp.89-116 / 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for their future use or for sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rankings to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniform application of the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS.
The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to the evaluation method of humans, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should have more weight than old data.
We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and which can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground in the situation where the domain consists of more than two classes, or where other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms.
While the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach can work even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
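As an illustration of the mutual-interaction idea (a minimal sketch, not the authors' algorithm; the example matrix and weights are invented), importance scores can be computed by power iteration over a symmetrized, property-weighted interaction graph, so that link direction does not affect the result:

```python
def rank_by_mutual_interaction(W, n_iter=100):
    """Power iteration over a weighted interaction graph.

    W is a square list-of-lists where W[i][j] holds the property- and
    time-weighted strength of the link from entity i to entity j.
    Symmetrizing W makes influence flow in both directions, so the
    resulting scores do not depend on link direction.
    """
    n = len(W)
    M = [[W[i][j] + W[j][i] for j in range(n)] for i in range(n)]
    s = [1.0 / n] * n
    for _ in range(n_iter):
        s = [sum(M[i][j] * s[j] for j in range(n)) for i in range(n)]
        total = sum(s)
        s = [v / total for v in s]  # L1-normalize to keep scores bounded
    return s

# Three entities: 0 and 1 interact strongly; 2 is only weakly connected.
W = [[0.0, 2.0, 0.5],
     [0.0, 0.0, 0.5],
     [0.0, 0.0, 0.0]]
scores = rank_by_mutual_interaction(W)
```

In this toy graph, entities 0 and 1 end up with equal, higher scores than the weakly connected entity 2, regardless of which entity the original links pointed at.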

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built that can distinguish ten different activities using only a single sensor, i.e., the smartphone accelerometer. The approach we take to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how a set of classes is split into two subsets at each node, the final tree that we obtain can differ. Since some classes may be correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning, and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can actually be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window of the last 2 seconds, etc. For experiments comparing the performance of END with those of other methods, the accelerometer data has been collected every 0.1 second for 2 minutes for each activity from 5 volunteers.
Among the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data collected for each activity (the data for the first 2 seconds are discarded because they do not have time window data), 4,700 have been used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END has been found to classify all ten activities with a fairly high accuracy of 98.4%. On the other hand, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
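The windowed accelerometer features described above (current vector magnitude plus the maximum, minimum, and standard deviation of magnitude over the last 2 seconds, i.e., 20 samples at the 0.1 s sampling interval) can be sketched as follows; the function name and feature dictionary are illustrative, not the authors' code:

```python
import math

def window_features(samples, window=20):
    """Features from the most recent `window` accelerometer samples
    (20 samples = 2 s at the paper's 0.1 s sampling interval).
    Each sample is an (x, y, z) acceleration tuple."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples[-window:]]
    mean = sum(mags) / len(mags)
    std = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    return {
        "magnitude": mags[-1],  # magnitude at the current time point
        "max": max(mags),       # peak magnitude within the window
        "min": min(mags),       # lowest magnitude within the window
        "std": std,             # variability of magnitude in the window
    }
```

Feature vectors like this, computed per time point, would then be fed to the binary classifiers at the nodes of each nested dichotomy tree.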

Directions of Implementing Documentation Strategies for Local Regions (지역 기록화를 위한 도큐멘테이션 전략의 적용)

  • Seol, Moon-Won
    • The Korean Journal of Archival Studies / no.26 / pp.103-149 / 2010
  • Documentation strategy has been experimented with in various subject areas and local regions since the late 1980s, when it was proposed as an archival appraisal and selection method by archival communities in the United States. Though it was criticized as too idealistic, it is worth shedding new light on the potential of the strategy for documenting local regions in the digital environment. The purpose of this study is to analyse the implementation issues of documentation strategy and to suggest directions for documenting local regions of Korea through application of the strategy. The documentation strategy, developed more than twenty years ago mostly in western countries, offers some implications for documenting local regions even in current digital environments, as follows. Firstly, documentation strategy can enhance the value of archivists as well as archives in local regions, because under the strategy the archivist should be an active shaper of history rather than a passive receiver of archives. It can also be a solution for overcoming the poor conditions of local archives management in Korea. Secondly, the strategy can encourage cooperation between collecting institutions, including museums, libraries, archives, cultural centers, and history institutions, in each local region. In the networked environment this cooperation can be achieved more effectively than in the traditional environment, where a heavy workload was imposed on the cooperating institutions. Thirdly, the strategy can facilitate solidarity among various groups in a local region. According to analysis of strategy projects, harnessing the knowledge, passion, and enthusiasm of related groups is essential to implementing the strategy effectively. It can also provide a methodology for minor groups of society to document their memories.
This study suggests directions for documenting local regions in consideration of the current archival infrastructure of Korea, as follows. Firstly, very selective and intensive documentation should be pursued rather than comprehensive documentation. Though deciding which subjects have priority for documentation is a very political problem, the interests of local community members as well as professional groups should be seriously considered in the decision-making process. Secondly, it is effective to plan integrated representation of local history over the distributed custody of local archives. It would be desirable to implement an archival gateway for integrated search and representation of local archives regardless of their location. Thirdly, it is necessary to attempt digital documentation using Web 2.0 technologies. Documentation strategy, as a methodology of selecting and acquiring archives, cannot completely avoid the subjectivity and prejudices of the appraiser. To mitigate this problem, an open documentation system should be prepared to reflect the different interests of different groups. Fourthly, it is desirable to apply the conspectus model used in cooperative collection management of libraries to document local regions digitally. A conspectus can show the existing documentation strength and future documentation intensity of each participating institution. Using this, the documentation level of each subject area can be set cooperatively and effectively in local regions.

Studies on the Pulping Characteristics of Larchwood (Larix leptolepis Gordon) by Alkaline Process with Additives (첨가제(添加劑) 알칼리 법(法)에 의한 일본 잎갈 나무의 펄프화(化) 특성(特性)에 관(關)한 연구(硏究))

  • Lim, Kie-Pyo;Shin, Dong-Sho
    • Journal of the Korean Wood Science and Technology / v.7 no.2 / pp.3-30 / 1979
  • Larch ($\underline{Larix}$ $\underline{leptolepis}$ GORDON), one of the major afforestation species in Korea in view of its growing stock and rate of growth, is not favored as a raw material for pulp due to its low pulp yield and difficulties with bleaching, arising from the high content of extractives in the wood, the high heartwood ratio, and the active phenolics, respectively. The purpose of this study is to investigate, firstly, the characteristics of pulping with various additives as cellulose protectors to improve pulp yield, and secondly, bleaching with oxygen in place of the chlorination-alkali extraction of the five-stage sequence, to reduce chlorine compounds in bleaching effluents. The kraft cooking liquor for five age groups of larchwood was 18 percent active alkali with 25 percent sulfidity and a 5:1 liquor-to-wood ratio, and each soda liquor for sapwood and heartwood of the 15-year-old larchwood was 18 percent alkali with one of the following cellulose protectors as the additive: magnesium sulfate ($MgSO_4$, 2.5%), zinc sulfate ($ZnSO_4$, 2.5%), aluminium sulfate ($Al_2(SO_4)_3$, 2.5%), potassium iodide (KI, 2.5%), hydroquinone (HQ, 2.5%), anthraquinone (AQ, 0.1%), and ethylene diamine (EDA, 2.5%). Each anthraquinone-soda liquor for the determination of a suitable cooking condition was at an active alkali level of 15, 17, or 19 percent with 1.0, 0.5, or 0.1 percent anthraquinone, respectively. The cooking procedure was scheduled to heat to 170$^{\circ}C$ in 90 minutes and to cook for 90 minutes at the maximum temperature. The anthraquinone-soda pulps from both heartwood and sapwood of 15-year-old larchwood, prepared with 0.5 percent anthraquinone and 18 percent active alkali, were bleached in a four-stage ODED sequence (O: oxygen bleaching, D: chlorine dioxide bleaching, and E: alkali extraction).
In the first stage, oxygen at atmospheric pressure was applied to pulp at 30 percent consistency with 0.1 percent magnesium oxide (MgO) and 3, 6, or 9 percent sodium hydroxide on an oven-dry basis, and the bleached results were compared with pulps bleached under the conventional CEDED sequence (C: chlorination). The results of the study are summarized as follows: 1. The screened yield of larch kraft pulp did not differ appreciably among age groups, but the heartwood ratio, basic density, fiber length, and water-extractives content of the wood, and the tear factor of the pulp, increased with tree age, while the total yield of the pulp decreased. 2. The yield of soda pulp from the 15-year-old larchwood with the various chemicals for cellulose protection was slightly higher than that of pure soda pulp and slightly lower than that of kraft pulp. The influence of the cellulose protectors on yield was similar for pulps from both sapwood and heartwood. The effective protectors among the seven additives were KI, $MgSO_4$, and AQ, for which the yields of screened pulp were as high as that of kraft pulp. Considering the additive level required, AQ was the most effective in improving the yield and quality of the pulp. 3. When the amount of AQ increased in soda cooking, the yield and quality of the pulp increased, but rejects in the total yield increased as the amount of active alkali decreased from 19 to 15 percent. The best proportion of AQ seemed to be 0.5 percent at 17 percent active alkali in anthraquinone-soda pulping. 4. On bleaching the AQ-soda pulp at 30 percent consistency with oxygen at atmospheric pressure in the first stage of the ODED sequence, the more caustic soda was added, the brighter the bleached pulp obtained, but a more lignin-selective bleaching reagent in proportion to the oxygen was necessary to maintain the yield increase gained from the addition of anthraquinone. 5.
In conclusion, a suitable pulping condition for larchwood to improve the yield and quality of the chemical pulp to the level of kraft pulp from the conventional process seemed to be: A) selection of young larchwood to prevent decreases in yield and quality due to the accumulation of extractives in old wood, B) application of 0.5 percent anthraquinone to conventional soda cooking at 18 percent active alkali, followed by C) bleaching with oxygen at atmospheric pressure at high consistency (30%) with 0.1 percent magnesium oxide in the first stage of the ODED sequence, to reduce the content of chlorine compounds in the effluent.
