• Title/Summary/Keyword: Performance Standard


Incorporating Social Relationship discovered from User's Behavior into Collaborative Filtering (사용자 행동 기반의 사회적 관계를 결합한 사용자 협업적 여과 방법)

  • Thay, Setha;Ha, Inay;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.1-20 / 2013
  • Nowadays, social networks are huge communication platforms that enable people to connect with one another and bring users together to share common interests, experiences, and daily activities. Users spend hours per day maintaining personal information and interacting with other people via posts, comments, messages, games, social events, and applications. Due to the growth of users' distributed information in social networks, there is great potential to utilize this social data to enhance the quality of recommender systems. Several studies on social network analysis investigate how social networks can be used in the recommendation domain. Among these studies, we are interested in taking advantage of the interaction between a user and others in a social network, which can be determined and is known as a social relationship. Furthermore, a user's decision to purchase a product often depends on the suggestions of people who have either the same preferences or a close relationship. For this reason, we believe that users' relationships in a social network provide an effective way to improve the quality of a recommender system's predictions of user interests. Social relationships between users derived from a social network are therefore a natural factor with which to improve the prediction of user preferences in the conventional approach. Recommender systems are dramatically increasing in popularity and are currently used by many e-commerce sites such as Amazon.com, Last.fm, and eBay.com. Collaborative filtering (CF) is one of the essential and powerful techniques in recommender systems for suggesting appropriate items to a user by learning the user's preferences. CF focuses on user data and generates automatic predictions about a user's interests by gathering information from users who share a similar background and preferences. Specifically, the intention of CF is to find users who have similar preferences and to suggest to the target user the items most preferred by those nearest-neighbor users. There are two basic units that a CF method must consider: the user and the item. Each user provides rating values on items, i.e., movies, products, books, etc., to indicate his or her interest in those items. CF then uses the user-rating matrix to find a group of users whose ratings are similar to the target user's and predicts unknown rating values for items the target user has not rated. CF has been successfully implemented in both information filtering and e-commerce applications. However, it still faces important challenges such as cold start, data sparsity, and scalability, which are reflected in the quality and accuracy of its predictions. To overcome these challenges, many researchers have proposed various kinds of CF methods, such as hybrid CF, trust-based CF, and social network-based CF. With the aim of improving the recommendation performance and prediction accuracy of standard CF, in this paper we propose a method that integrates the traditional CF technique with the social relationships between users discovered from their behavior in a social network, i.e., Facebook. We identify users' relationships from behavior such as posts and comments exchanged with friends on Facebook. We believe that social relationships implicitly inferred from user behavior can compensate for the limitations of the conventional approach.
Therefore, we extract the posts and comments of each user using the Facebook Graph API and calculate a feature score for each term to obtain a feature vector for computing user similarity. We then combine the result with the similarity value computed using the traditional CF technique. Finally, our system provides a list of recommended items according to the neighbor users who have the largest total similarity to the target user. To verify and evaluate the proposed method, we performed an experiment on data collected from our Movies Rating System. A prediction accuracy evaluation was conducted to demonstrate the correctness of our algorithm's recommendations in terms of MAE, and a performance evaluation was made to show the effectiveness of our method in terms of precision, recall, and F1-measure. An evaluation of coverage is also included to assess the ability to generate recommendations. The experimental results show that our proposed method outperforms the benchmarks and suggests items to users more accurately. The use of user behavior in the social network yields a significant improvement of up to 6% in recommendation accuracy. Moreover, the recommendation performance experiment shows that incorporating social relationships observed from user behavior into CF is beneficial, generating recommendations with a 7% improvement in performance compared with the benchmark methods. Finally, we confirm that interaction between users in a social network can enhance accuracy and give better recommendations than the conventional approach.
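
To make the combination step concrete, below is a minimal Python sketch of the approach described above: a cosine similarity over term-feature vectors (standing in for the vectors extracted from Facebook posts and comments) is blended with a conventional rating-based CF similarity. The toy data and the blending weight alpha are hypothetical; the abstract states that the two similarity values are combined but does not give the exact formula.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity; returns 0 if either vector is all-zero."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return float(u @ v / (nu * nv)) if nu and nv else 0.0

def cf_similarity(ratings, a, b):
    """Pearson correlation over co-rated items (classic user-based CF).
    A rating of 0 means 'unrated'."""
    mask = (ratings[a] > 0) & (ratings[b] > 0)
    if mask.sum() < 2:
        return 0.0
    ra = ratings[a][mask] - ratings[a][mask].mean()
    rb = ratings[b][mask] - ratings[b][mask].mean()
    return cosine(ra, rb)

def combined_similarity(ratings, features, a, b, alpha=0.5):
    """Blend rating-based CF similarity with social feature similarity.
    alpha is a hypothetical blending weight, not the paper's formula."""
    return (alpha * cf_similarity(ratings, a, b)
            + (1 - alpha) * cosine(features[a], features[b]))

# Toy data: 3 users x 4 items (0 = unrated) and 3-term feature vectors.
ratings = np.array([[5, 3, 0, 1], [4, 0, 0, 1], [1, 1, 5, 4]], dtype=float)
features = np.array([[0.2, 0.8, 0.1], [0.3, 0.7, 0.0], [0.9, 0.1, 0.5]])
print(combined_similarity(ratings, features, 0, 1))
```

Neighbors would then be ranked by this combined score and the top-N used to predict the target user's unknown ratings, as in standard user-based CF.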

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia Pacific Journal of Information Systems / v.21 no.2 / pp.89-116 / 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or for sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rankings to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked from higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to the evaluation method of humans, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his or her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should carry more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are addressed and which can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting the property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, applying the time concept to the expertise weights, as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms. While the matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with ours. In our ranking framework, various folksonomy ranking policies can be expressed with the ranking factors combined, and our approach can work even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
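
As an illustration of ranking by mutual interaction rather than by directed voting, the Python sketch below propagates importance scores over an undirected, property-weighted entity graph by power iteration. The graph, the weights, and the update rule are simplified assumptions, not the paper's exact algorithm; the property, time-decay, and expertise weights would all be folded into the symmetric edge weights W.

```python
import numpy as np

def mutual_interaction_rank(W, n_iter=100, tol=1e-9):
    """Propagate importance over a weighted, direction-free entity graph.

    W[i, j] is the interaction weight between entities i and j (users,
    resources, tags), already scaled by property/time/expertise weights.
    This is a generic power-iteration sketch of the mutual-interaction idea.
    """
    n = W.shape[0]
    s = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        s_new = W @ s
        s_new /= s_new.sum() or 1.0        # L1-normalize each round
        if np.abs(s_new - s).sum() < tol:  # stop at convergence
            break
        s = s_new
    return s

# Toy graph: entities 0-1 are users, 2-3 resources, 4 a tag.
W = np.array([[0, 0, .9, .2, .5],
              [0, 0, .1, .8, .5],
              [.9, .1, 0, 0, .7],
              [.2, .8, 0, 0, .3],
              [.5, .5, .7, .3, 0]])
print(mutual_interaction_rank(W).round(3))
```

Because W is symmetric, the same scores are obtained regardless of which link direction a property was recorded in, which is the point of the mutual-interaction formulation.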

A Methodology to Develop a Curriculum based on National Competency Standards - Focused on Methodology for Gap Analysis - (국가직무능력표준(NCS)에 근거한 조경분야 교육과정 개발 방법론 - 갭분석을 중심으로 -)

  • Byeon, Jae-Sang;Ahn, Seong-Ro;Shin, Sang-Hyun
    • Journal of the Korean Institute of Landscape Architecture / v.43 no.1 / pp.40-53 / 2015
  • To train manpower meeting the requirements of industry, the introduction of the National Qualification Frameworks (hereinafter NQF) based on National Competency Standards (hereinafter NCS) was decided in 2001, led by the Office for Government Policy Coordination. For landscape architecture in the construction field, the pilot "NCS - Landscape Architecture" was developed in 2008 and test-operated for three years starting in 2009. In particular, as the 'realization of a society based on competence, not on educational background' was adopted as one of the major projects of the Park Geun-Hye government (inaugurated in 2013), the NCS system was constructed on a nationwide scale as a concrete means of implementing it. However, because the NCS developed by the state specifies ideal job-performing abilities, it has weaknesses: it cannot reflect actual operational differences in student level between universities, difficulties in securing equipment and professors, or constraints on the number of courses in current curricula. For a soft landing into a practical curriculum, a process of clearly analyzing the gap between the current curriculum and the NCS must come first. Gap analysis is the initial-stage methodology for reorganizing an existing curriculum into an NCS-based one: based on the ability-unit elements and performance standards of each NCS ability unit, the discrepancy with (or level of coincidence of) the department's existing curriculum is scored and analyzed on a 5-point Likert scale. Thus, by measuring the level of coincidence and the gap between the current university curriculum and the NCS, universities wishing to operate NCS in the future can secure a basic tool to verify the applicability of NCS and the effectiveness of further development and operation. The advantages of reorganizing a curriculum through gap analysis are, first, that it provides a quantitative index of the NCS adoption rate for each department, which can be connected to government financial support projects, and, second, that it provides an objective standard of sufficiency or insufficiency when reorganizing into an NCS-based curriculum. In other words, when introducing the relevant NCS subdivisions, the insufficient ability units and ability-unit elements can be extracted, and at the same time the supplementary matters for each ability-unit element in each existing subject can be identified, providing direction for detailed class programs and the opening of basic subjects. The Ministry of Education and the Ministry of Employment and Labor must bring in people from industry to actively develop and supply NCS standards at a practical level, so that the requirements of the industrial field are systematically reflected in education, training, and qualification, and universities wishing to apply NCS must reorganize their curricula to connect work and qualification based on NCS. To enable this, universities must consider the prospects of the relevant industry and the relationship between faculty resources within the university and local industry in order to clearly select the NCS subdivision to be applied. Afterwards, gap analysis must be used in the NCS-based curriculum reorganization to establish the direction of reorganization more objectively and rationally, so as to participate efficiently in the process-evaluation-type qualification system.
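
As a small illustration of how such a gap analysis could be tabulated, the sketch below scores how well existing courses cover each NCS ability-unit element on the 1-to-5 Likert scale and flags the largest gaps. The course names, elements, and flagging threshold are hypothetical examples, not data from the study.

```python
import pandas as pd

# Hypothetical gap-analysis sheet: each row scores how well an existing
# course covers one NCS ability-unit element (5 = fully covered, 1 = not).
scores = pd.DataFrame({
    "ability_unit": ["planting design", "planting design", "grading", "grading"],
    "element": ["plant selection", "layout drawing", "cut/fill calc", "drainage plan"],
    "course": ["Planting Studio", "Planting Studio", "Site Eng.", "Site Eng."],
    "likert": [4, 2, 5, 1],
})

# Gap = distance from full coverage; rank elements needing supplementation.
scores["gap"] = 5 - scores["likert"]
report = (scores.groupby(["ability_unit", "element"])["gap"]
                .mean().sort_values(ascending=False))
print(report)                              # largest gaps first
print(report[report >= 2].index.tolist())  # elements to add or reinforce
```

The same table, aggregated per ability unit, yields the kind of quantitative NCS adoption (coincidence) index mentioned above.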

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.123-139 / 2019
  • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The history of this market is short, however, and bad debt started to increase again after the global financial crisis of 2009 due to the recession in the real economy. NPL has become a major investment class in recent years as investment capital from the domestic capital market began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it is scarce because the history of capital-market investment in the domestic NPL market is short. In addition, decision-making through more scientific and systematic analysis is required due to declining profitability and price fluctuations driven by swings in the real estate business. In this study, we propose a prediction model that can determine whether the benchmark yield will be achieved, using NPL-market-related data in accordance with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, with a total of 2,291 property records. As independent variables, from the 11 variables describing the characteristics of the real estate, only those related to the dependent variable were selected. For variable selection, one-to-one t-tests, stepwise logistic regression, and a decision tree were performed, yielding seven independent variables: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. This is because models predicting binary variables are more accurate than models predicting continuous variables, and this accuracy is directly related to the effectiveness of the model. In addition, for a special purpose company, whether or not to purchase a property is the main concern, so knowing whether a certain level of return will be achieved is enough to make a decision. For the dependent variable, we constructed and compared predictive models with the threshold adjusted, to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. As a result, the average hit ratio of the predictive model built with the dependent variable defined by the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and 7 independent variables, we constructed prediction models using five methodologies, discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic-algorithm linear model, and compared them. To do this, 10 sets of training and testing data were extracted using the 10-fold validation method. After building the models on these data, the hit ratio of each set was averaged and performance compared. The average hit ratios of the prediction models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic-algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively.
The model using the artificial neural network was confirmed to be the best. This study shows that it is effective to utilize the 7 independent variables and an artificial neural network prediction model in the future NPL market. The proposed model predicts in advance whether the 12% return on new properties will be achieved, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
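
The model comparison described above can be reproduced in outline with scikit-learn's 10-fold cross-validation, as in the sketch below. The data are synthetic stand-ins for the 2,291 property records and 7 selected features, and the genetic-algorithm linear model is omitted because scikit-learn has no built-in equivalent.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins: X holds the 7 loan features, y is 1 when the
# 12% benchmark return was achieved.
rng = np.random.default_rng(0)
X = rng.normal(size=(2291, 7))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=2291) > 0).astype(int)

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    hit = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name:22s} mean hit ratio = {hit.mean():.4f}")
```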

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, for 1,487 daily observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility results. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can now trade. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows a -72% return versus +245.6% for SVR-based asymmetric E-GARCH; and MLE-based asymmetric GJR-GARCH shows a -98.7% return versus +126.3% for SVR-based asymmetric GJR-GARCH.
The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored in search of better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
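
To make the SVR-based estimation idea concrete, here is a minimal sketch that replaces the GARCH(1,1) recursion with an SVR trained on a lagged squared return and a lagged variance proxy. The RiskMetrics-style EWMA proxy, the toy return series, and the kernel settings are assumptions of this sketch, standing in for the paper's KOSPI 200 setup.

```python
import numpy as np
from sklearn.svm import SVR

def svr_garch_forecast(returns, kernel="linear", train_frac=0.8, lam=0.94):
    """SVR stand-in for GARCH(1,1): learn sigma^2_t ~ f(r^2_{t-1}, sigma^2_{t-1}),
    with squared returns as the noisy variance target and an EWMA as the
    lagged variance proxy (an assumption, not the paper's exact features)."""
    r2 = returns ** 2
    sig2 = np.empty_like(r2)
    sig2[0] = r2[0]
    for t in range(1, len(r2)):
        sig2[t] = lam * sig2[t - 1] + (1 - lam) * r2[t - 1]
    X, y = np.column_stack([r2[:-1], sig2[:-1]]), r2[1:]
    n_train = int(train_frac * len(y))
    model = SVR(kernel=kernel, C=1.0, epsilon=1e-6)
    model.fit(X[:n_train], y[:n_train])
    pred = model.predict(X[n_train:])
    mse = float(np.mean((pred - y[n_train:]) ** 2))
    return np.sqrt(np.clip(pred, 0.0, None)), mse  # volatility path, test MSE

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_t(df=5, size=1487)   # fat-tailed toy series
vol, mse = svr_garch_forecast(returns)
print(f"test MSE = {mse:.3e}, last volatility forecast = {vol[-1]:.4f}")
```

An IVTS-style rule would then compare the next forecast with today's to decide whether to buy, sell, or hold the volatility position.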

Studies on the Determination Method of Natural Sweeteners in Foods - Licorice Extract and Erythritol (식품 중 감초추출물 및 에리스리톨 분석법에 관한 연구)

  • Hong Ki-Hyoung;Lee Tal-Soo;Jang Yaung-Mi;Park Sung-Kwan;Park Sung-Kug;Kwon Yong-Kwan;Jang Sun-Yaung;Han Ynun-Jeong;Won Hye-Jin;Hwang Hye-Shin;Kim Byung-Sub;Kim Eun-Jung;Kim Myung-Chul
    • Journal of Food Hygiene and Safety / v.20 no.4 / pp.258-266 / 2005
  • Licorice extract and erythritol, food additives used in Korea, are widely used in foods as sweeteners. Their application in food is regulated by the standard and specification for food additives, but no official analytical method for the determination of these sweeteners in food has been established. Accordingly, we set up an analytical method for glycyrrhizic acid in several foods using thin-layer chromatography (TLC) and high-performance liquid chromatography (HPLC). The qualitative analysis of glycyrrhizic acid consists of clean-up with a Sep-Pak C18 cartridge and separation of the sweetener on a Silica gel 60 F254 TLC plate using 1-butanol : 4 N ammonia solution : ethanol (50:20:10) as the mobile solvent. The quantitative analysis of glycyrrhizic acid was performed on a Capcell Pak C18 column at a wavelength of 254 nm with distilled water : acetonitrile (62:38, pH 2.5) as the mobile phase. We also set up an analytical method for erythritol in several foods by HPLC. The qualitative analysis of erythritol consists of clean-up with distilled water and hexane. The quantitative analysis of erythritol was performed on an Asahipak NH2P-50 column with RI detection and distilled water : acetonitrile (25:75) as the mobile phase. The results determined as glycyrrhizic acid in 105 items were as follows: N.D.~48.7 ppm for 18 items of soy sauce, N.D.~5.3 ppm for 12 items of sauce, N.D.~988.93 ppm for 15 items of health food, N.D.~180.7 ppm for 26 items of beverages, and N.D.~2.6 ppm for 8 items of alcoholic beverages, respectively, with N.D. for 63 items among the others. The results determined as erythritol in 52 items were as follows: N.D.~155.6 ppm for 13 items of gum and N.D.~398.1 ppm for 12 items of health foods, respectively, with N.D. for 45 items among the others.

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision-making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, the data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and applied a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product-name dataset to cluster product groups more efficiently. Product names similar to the KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional sampling-based or assumption-laden methods. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors.
Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in accuracy and reliability. The semantics-based word embedding module can be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. Also, the product-group clustering could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect they will further improve the performance of the basic model conceptually proposed in this study.
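
A compact sketch of the pipeline using gensim's Word2Vec: train embeddings on tokenized product names, average the token vectors of each product name, and sum the sales of products whose cosine similarity to an index word exceeds a threshold. The corpus, sales figures, and threshold below are toy assumptions; only the reported training settings (vector size 300, window 15) come from the study.

```python
import numpy as np
from gensim.models import Word2Vec

# Hypothetical product-name corpus (tokenized) and per-product sales.
corpus = [["stainless", "steel", "pipe"], ["steel", "pipe", "fitting"],
          ["led", "lamp"], ["led", "light", "bulb"], ["copper", "pipe"]]
sales = {"stainless steel pipe": 120, "steel pipe fitting": 80,
         "led lamp": 60, "led light bulb": 40, "copper pipe": 30}

# vector_size=300 and window=15 follow the paper; min_count=1 only
# because this toy corpus is tiny.
model = Word2Vec(corpus, vector_size=300, window=15, min_count=1, seed=0)

def name_vec(name):
    """Average the word vectors of a product name's tokens."""
    return np.mean([model.wv[w] for w in name.split() if w in model.wv], axis=0)

def market_size(index_word, threshold=0.5):
    """Sum sales of products cosine-similar to a KSIC-style index word."""
    q = model.wv[index_word]
    total = 0
    for name, amount in sales.items():
        v = name_vec(name)
        if q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) >= threshold:
            total += amount
    return total

print(market_size("pipe"))  # market size of the 'pipe' product group
```

Raising or lowering the threshold narrows or widens the product group, which is the similarity-threshold adjustment mentioned above.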

A Comparative Study of the Standard Uptake Values of the PET Reconstruction Methods; Using Contrast Enhanced CT and Non Contrast Enhanced CT (PET/CT 영상에서 조영제를 사용하지 않은 CT와 조영제를 사용한 CT를 이용한 감쇠보정에 따른 표준화섭취계수의 비교)

  • Lee, Seung-Jae;Park, Hoon-Hee;Ahn, Sha-Ron;Oh, Shin-Hyun;NamKoong, Heuk;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.12 no.3 / pp.235-240 / 2008
  • Purpose: At the beginning of PET/CT, computed tomography was used mainly for attenuation correction (AC), but as the performance of CT has increased, it can provide improved diagnostic information with contrast media. It has been controversial, however, whether contrast media affect AC in PET/CT scans. Some published studies show that contrast media can cause overestimation when the CT data are used for AC processing; others hold that contrast media may alter the SUV because of the overestimated AC, but without a definite effect on diagnosis. Thus, the effect of contrast media on AC was investigated in this study. Materials and Methods: Patient inclusion criteria required a history of malignancy and performance of an integrated PET/CT scan and a contrast-enhanced CT scan within a 1-day period. Thirty oncologic patients who underwent PET/CT staging evaluation from December 2007 to June 2008 met these criteria. All patients fasted for at least 6 hr before the IV injection of approximately 5.6 MBq/kg (0.15 mCi/kg) of 18F-FDG and were scanned about 60 min after injection. All patients had a whole-body PET/CT performed without IV contrast media, followed by a contrast-enhanced CT, on the Discovery STe PET/CT scanner. The CT data were used for AC, and PET images were produced after AC. ROIs were drawn and SUVs measured. A paired t-test was performed to assess the significance of the difference between the SUVs obtained from the two attenuation-corrected PET images. Results: The mean and maximum standardized uptake values (SUVs) for different regions were averaged over all patients. Comparing values before and after the use of contrast media, most ROIs had increased SUVs on contrast-enhanced CT compared to non-contrast-enhanced CT. All regions had increased SUVs, and their p-values were under 0.05, except for the mean SUV of the heart region. Conclusion: The effect on SUV measurements that occurs when a contrast-enhanced CT is used for attenuation correction could have significant clinical ramifications. Some studies have argued that the percentage change in SUV is too small to determine or modify the clinical management of oncology patients, because the difference is hardly noticed by the interpreter. Obviously, however, a numerical change did occur, and at the stage of finding the primary region a small change can matter: regions such as the liver, which shows a greater change than other regions, need more attention.
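
The statistical comparison described in Materials and Methods reduces to a paired t-test per ROI, roughly as in the sketch below; the SUV values are simulated stand-ins for the 30 patients' measurements, not the study's data.

```python
import numpy as np
from scipy import stats

# Simulated paired SUVs for one ROI across 30 patients: AC with
# non-contrast CT vs. AC with contrast-enhanced CT (small positive shift).
rng = np.random.default_rng(2)
suv_noncontrast = rng.normal(2.5, 0.4, size=30)
suv_contrast = suv_noncontrast + rng.normal(0.15, 0.10, size=30)

t_stat, p_value = stats.ttest_rel(suv_contrast, suv_noncontrast)
pct = 100 * (suv_contrast - suv_noncontrast).mean() / suv_noncontrast.mean()
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean SUV change = {pct:.1f}%")
```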


A Study on The Enhancement of Aviation Safety in Airport Planning & Construction from a Legal Perspective (공항개발계획과 사업에서의 항공안전성 제고에 대한 법률적 소고)

  • Kim, Tae-Han
    • The Korean Journal of Air & Space Law and Policy / v.27 no.2 / pp.67-106 / 2012
  • Today, air traffic at airports is complicated, including a significant increase in the volume of air transport, and aviation accidents occur constantly. Therefore, we should newly recognize the importance of air traffic safety, the core value of air traffic. Nothing is more important than the location of the airport, the basic infrastructure of air traffic, and the assurance of safety for its facilities and equipment. From this standpoint, I analyze the step-by-step safety factors to be taken into account in airport development projects, from the construction or improvement of an airport under current laws and institutions, and give my opinion on the enhancement of safety in airport design and construction. The safety of air traffic, as well as of the airport itself, depends on the location, development, design, construction, inspection, and management of the airport and its facilities; we must therefore fulfill the national responsibility of preventing the risks that large social overhead capital poses to many and unspecified persons in modern society, through legislation on the involvement of specialists and on locational criteria for aviation safety from the planning stage of airport development. In addition, well-defined installation standards for airports and air navigation facilities, the key points of the airport development phase, can ensure the safety of the airport and its facilities. Of course, the installation standards for airports and air navigation facilities are based on global standards, owing to the nature of air traffic. However, to prevent confusion over safety standards in their design, construction, and inspection, and to ensure aviation safety, the safety standards must be further elaborated in the course of domestic legislation. The criteria for installing air navigation facilities are regulated most specifically. However, to ensure the safe operation of air navigation facilities, the performance certification system for the safety of air navigation facilities must change over from a voluntary to a mandatory requirement and be applied to foreign as well as domestic producers. Of course, negligence of pilots and defective aircraft maintenance account for a large portion of aviation accidents. However, I think that air traffic accidents can be reduced if the airport and airport facilities are sound enough to ensure safety. Therefore, legal and institutional supplements that prioritize aviation safety from the stage of airport development may be necessary.


Recognition and Request for Medical Direction by 119 Emergency Medical Technicians (119 구급대원들이 지각하는 의료지도의 필요성 인식과 요구도)

  • Park, Joo-Ho
    • The Korean Journal of Emergency Medical Services / v.15 no.3 / pp.31-44 / 2011
  • Purpose: The purpose of emergency medical services (EMS) is to save human lives and preserve the integrity of the body in emergency situations. Such treatment must be performed by those qualified in medical practice, as there is risk to human life and the possibility of major physical and mental injury resulting from the urgency of time and the invasiveness inflicted on the body. In emergency medical activities, 119 emergency medical technicians mainly perform the task, but they may not do so independently and are required to receive medical direction. The purpose of this study is to examine the recognition of, and request for, medical direction by 119 emergency medical technicians, in order to provide basic information for the development of medical direction programs suited to the characteristics of EMS, and for studies on EMS, for the sake of efficient operation of prehospital EMS. Method: A questionnaire was administered via e-mail during July 1-31, 2010 to 675 participants, comprising emergency medical technicians, nurses, and other emergency crews in Gyeongbuk; 171 valid responses were used in the final analysis. Regarding the emergency medical technicians' scope of responsibilities defined in Attached Form 14 of the Enforcement Regulations on EMS, a t-test analysis was conducted using the means and standard deviations of the level of request for medical direction on the scope of responsibilities of Level 1 and Level 2 emergency medical technicians as the scale of medical direction request. General characteristics, experience, reasons for perceived necessity, request levels regarding emergency medical technicians and medical directors, medical direction method, workplace of the medical director, feedback content, and request level for improvement plans were analyzed through frequencies and percentages. The level of experience with medical direction and its perceived necessity were analyzed through the χ² test. Results: Regarding medical direction experience by qualification, experience was highest among Level 1 emergency medical technicians at 53.3%, and 80.3% responded that the experience was helpful. As for recognition of the necessity of medical direction, 71.3% responded "necessary," highest among nurses at 76.9%. The most common reason for responding "necessary" was to reduce the risks and side effects of EMS for patients (75.4%), and the most common reason for responding "not necessary" was the delay of EMS caused by requesting medical direction (71.4%). Regarding the request level within the task scope of emergency medical technicians, injection of a certain amount of solution during a state of shock was highest (3.10 ± 0.96) for Level 1 emergency medical technicians, endotracheal intubation was highest (3.12 ± 1.03) for nurses, sublingual administration of nitroglycerin (NTG) during chest pain was highest (2.62 ± 1.02) for Level 2 emergency medical technicians, and regulation of heartbeat using an AED was highest (2.76 ± 0.99) for other emergency crews. For the revitalization of medical direction, improvement in the capability of EMS (78.9%) was requested of emergency crews, and the ability to evaluate the medical state of the patient ranked highest (80.1%) in the level of request for medical directors.
Prehospital, direct medical direction was the most preferred method (60.8%); the emergency medical facility was the most preferred placement of the medical director (52.0%); evaluation of the appropriateness of EMS was the most requested feedback content (66.1%); and reinforcement of emergency crew (emergency medical technician) personnel was the most requested improvement (69.0%). Conclusion: Medical direction is an important policy in prehospital EMS activity, as 119 emergency medical technicians agreed on its necessity and over 80% of those who experienced medical direction said it was helpful. In addition, simulation training programs using algorithms and case studies with feedback are necessary to enhance the technical capability of ambulance teams on the professional EMS items with a high level of request within the task scope of emergency medical technicians, and recognition of medical direction is essential in the EMS field. To revitalize medical direction, we need to improve the task performance capability of 119 emergency medical technicians and medical directors, reinforce emergency medical personnel, build trust between emergency medical technicians and emergency physicians, and seek a professional operation plan for the medical direction center, so as to expand the direct medical direction method and enable treatment in advance, with the participation of the medical director even at the step at which the emergency report is received.
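
For reference, the two main analyses named in the Method section, the χ² test of experience versus perceived necessity and the t-test treatment of Likert-scale request levels, look roughly like this in Python. The cell counts and scores are illustrative, not the survey's raw data, and testing the Likert mean against the scale midpoint is an assumption of this sketch.

```python
import numpy as np
from scipy import stats

# Illustrative contingency table: rows = experienced medical direction
# (yes/no), columns = considers it necessary (yes/no).
table = np.array([[78, 13],
                  [44, 36]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Illustrative Likert request scores (1-5) for one task item, tested
# against the neutral midpoint 3 (assumption of this sketch).
likert = np.clip(np.round(np.random.default_rng(3).normal(3.10, 0.96, 91)), 1, 5)
t, p = stats.ttest_1samp(likert, 3.0)
print(f"mean = {likert.mean():.2f} +/- {likert.std(ddof=1):.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
```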