• Title/Summary/Keyword: Quality evaluation

Evaluation of the Usefulness of Gated RapidArc Radiation Therapy and Patient Application Using the Amplitude Mode (호흡 동조 체적 세기조절 회전 방사선치료의 유용성 평가와 진폭모드를 이용한 환자적용)

  • Kim, Sung Ki;Lim, Hhyun Sil;Kim, Wan Sun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.1
    • /
    • pp.29-35
    • /
    • 2014
  • Purpose : Gated RapidArc equipment, which allows respiratory-gated treatment to be delivered simultaneously with volumetric modulated arc therapy (VMAT), has recently become commercially available; previously, gating and RapidArc could not be combined. This study evaluated the accuracy and usefulness of Gated RapidArc and applied it to patients using the Amplitude mode. Materials and Methods : Dose distributions were measured with a tissue-equivalent solid water phantom and GafChromic film, and the films were analyzed with the Film QA analysis program using the gamma index (3%, 3 mm). Matrixx dosimetry equipment and the Compass dose analysis program were used to verify the three-dimensional dose distribution. Periodic respiratory signals synchronized with the solid phantom were generated with a 4D phantom and the Varian RPM respiratory gating system, and dose distributions were analyzed for free breathing and breath-hold. For patient application, four liver cancer patients treated between February 2013 and August 2013 were enrolled; 4DCT images covering the full respiratory cycle were acquired while each patient, watching the pattern of the respiratory cycle through video goggles, practiced following his or her own breathing phase exactly. For Gated RapidArc treatment, the respiratory cycle for the Amplitude mode was created by having the patient breathe three times and then hold the breath for 5-6 seconds in the 40-60% interval (Fig. 5); during treatment, the Beam On button was pressed while the patient held the breath in the 40-60% interval, a semi-automatic delivery. Results : The absolute doses calculated by the treatment planning system for non-gated and gated volumetric modulated arc therapy differed by less than 1%, and the difference between treatment techniques was also less than 1%. Gamma analysis (3%, 3 mm) showed 99% agreement, and organ-specific dose differences generally showed better than 95% agreement. During gated VMAT, the respiratory cycle created in the Amplitude mode agreed well with each patient's actual breathing cycle. Conclusion : Non-gated and gated VMAT showed very good agreement in absolute dose and dose distribution, so gated VMAT is considered applicable to the treatment of tumors that move with thoracic or abdominal respiration. Even on equipment that does not gate automatically, creating the respiratory cycle in the Amplitude mode through the goggles and holding the breath for about 5-6 seconds made respiratory-gated VMAT treatment feasible.
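
The gamma analysis above combines a dose-difference tolerance (3%) with a distance-to-agreement tolerance (3 mm). As a rough illustration of that criterion only (not the Film QA program's actual implementation), the following Python sketch computes a global gamma pass rate for two hypothetical 1-D dose profiles:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1-D global gamma analysis with 3%/3 mm criteria.
    dose_ref, dose_eval: reference/evaluated dose profiles on the same grid;
    positions: spatial coordinates of the samples, in mm."""
    d_norm = dose_tol * dose_ref.max()          # global 3% dose tolerance
    gammas = []
    for x_r, d_r in zip(positions, dose_ref):
        dd = (dose_eval - d_r) / d_norm         # dose-difference term
        dx = (positions - x_r) / dist_tol       # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dx**2).min())
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

# toy example: a 1%-rescaled, 0.5 mm-shifted copy of a Gaussian profile
x = np.linspace(-50, 50, 201)                   # 0.5 mm sample spacing
ref = np.exp(-x**2 / 400)
ev = 1.01 * np.exp(-(x - 0.5)**2 / 400)
print(f"gamma (3%, 3 mm) pass rate: {gamma_pass_rate(ref, ev, x):.1f}%")
```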

Requirement and Perception of Parents on the Subject of Home Economics in Middle School (중학교 가정교과에 대한 학부모의 인식 및 요구도)

  • Shin Hyo-Shick;Park Mi-Soog
    • Journal of Korean Home Economics Education Association
    • /
    • v.18 no.3 s.41
    • /
    • pp.1-22
    • /
    • 2006
  • The purpose of this study is to suggest desirable directions for home economics education by examining the requirements and perceptions of parents whose children have completed the middle school home economics course. Questionnaires were distributed to about 600 parents in Seoul, Busan, Gangwon, Gyeonggi, Chungcheong, Gyeongsang, Jeolla, and Jeju provinces, of which 560 were suitable for analysis. The questions comprised 61 items on required home economics content, items on the perception of the aims of home economics, and 11 items on the perception of the home economics curriculum and its management. The responses were analyzed for frequency, percentage, mean, standard deviation, and t-test using the SAS program. The results are summarized as follows. 1. Support was high for boys and girls learning together about healthy living, desirable character formation, and practical knowledge. 2. Among the teaching purposes of home economics, the item on scientific principles and knowledge for the improvement of home life scored 15.7% below the average value. 3. The content of home economics was perceived as highly related to real life; regarding the curriculum, the class periods and contents allotted to the home economics field were perceived as insufficient; and the guiding content was rated high in achievement and applicability regardless of sex. 4. The area parents thought should be emphasized most in home economics was the family unit. 5. Among first-year content requirements, the middle unit on growth and development rated highest at 4.11, while basic principles and practice rated lowest at 3.70. 6. Among second-year content requirements, the middle unit on youth and consumer life rated highest at 4.09. 7. Among third-year content requirements, career choice and job ethics rated highest at 4.16, and meal preparation and evaluation lowest at 3.50.

DC Resistivity method to image the underground structure beneath river or lake bottom (하저 지반특성 규명을 위한 전기비저항 탐사)

  • Kim Jung-Ho;Yi Myeong-Jong;Song Yoonho;Cho Seong-Jun;Lee Seong-Kon;Son Jeongsul
    • 한국지구물리탐사학회:학술대회논문집
    • /
    • 2002.09a
    • /
    • pp.139-162
    • /
    • 2002
  • Since weak zones and geological lineaments are easily eroded, weak zones tend to develop beneath rivers, and careful evaluation of the ground condition is important when constructing structures that cross a river. DC resistivity surveys, however, have seldom been applied to water-covered areas, possibly because of difficulties in data acquisition and interpretation. Acquiring high-quality data may be the most important factor, and it is more difficult than in a land survey because a water layer overlies the underground structure to be imaged. Through numerical modeling and the analysis of case histories, we studied how to perform resistivity surveys in water-covered areas, from the characteristics of the measured data, through data acquisition methods, to interpretation. The discussion is organized according to electrode location, i.e., floating the electrodes on the water surface versus installing them on the water bottom, since the acquisition and interpretation methods vary with electrode location. This study confirmed that the DC resistivity method can provide fairly reasonable subsurface images, and that installing electrodes on the water bottom gives images with much higher resolution than floating them on the surface. Since data acquired over water have much lower sensitivity to the underground structure than land data and can be contaminated by noise such as streaming potential, it is very important to select an acquisition method and electrode array that provide high signal-to-noise data as well as high resolving power. Installing electrodes on the water bottom is suitable for detailed surveys because of its much higher resolving power, whereas floating electrodes, especially the streamer DC resistivity survey, suit reconnaissance surveys owing to the very high speed of field work.
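
For context, each voltage/current reading in a DC resistivity survey is converted to an apparent resistivity through the electrode array's geometric factor; for water-covered surveys that factor must additionally account for the conductive water layer. The sketch below shows only the standard land half-space baseline, with hypothetical readings:

```python
import numpy as np

def geometric_factor(a, b, m, n):
    """Geometric factor K for a four-electrode array on a half-space surface;
    a, b are current electrode positions, m, n potential electrode positions (m)."""
    r = lambda p, q: abs(p - q)                 # inter-electrode distance
    return 2 * np.pi / (1/r(a, m) - 1/r(b, m) - 1/r(a, n) + 1/r(b, n))

def apparent_resistivity(a, b, m, n, delta_v, current):
    """rho_a = K * dV / I, in ohm-m."""
    return geometric_factor(a, b, m, n) * delta_v / current

# Wenner array A-M-N-B with 10 m spacing, hypothetical 0.05 V at 0.1 A
print(f"{apparent_resistivity(0, 30, 10, 20, 0.05, 0.1):.1f} ohm-m")
```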

Assessment Study on Educational Programs for the Gifted Students in Mathematics (영재학급에서의 수학영재프로그램 평가에 관한 연구)

  • Kim, Jung-Hyun;Whang, Woo-Hyung
    • Communications of Mathematical Education
    • /
    • v.24 no.1
    • /
    • pp.235-257
    • /
    • 2010
  • Contemporary belief is that the creatively talented can create new knowledge and lead national development, so many countries take an interest in gifted education. The U.S.A., England, Russia, Germany, Australia, Israel, and Singapore enforce laws on gifted education to offer gifted classes, and the Korean government likewise enacted the Gifted Education Promotion Act in January 2000, with its Enforcement Ordinance announced in April 2002, making gifted education possible. The Enforcement Ordinance was revised in October 2008, mainly to expand the opportunity for gifted education to more students with special educational needs, for instance by establishing special gifted classes at each school. It is equally important that the quality of gifted education keep pace with this expanded opportunity; public opinion holds that expanding opportunity alone would be reckless, so assessment of teaching and learning programs for the gifted is indispensable. In this study, the mathematics teaching and learning programs of three middle schools were selected; each first-grade program was reviewed and analyzed through comparative tables of the regular and gifted curricula, the content taught was reviewed, and the programs were evaluated against assessment standards revised and modified from current mathematics teaching and learning programs. The following research questions were set up to assess the content areas and the appropriateness of the programs for the mathematically gifted. A. Does the formation of special-class content areas comply with the 7th national curriculum? 1. Which content areas of the regular curriculum are applied in the program? 2. Between enrichment and selection in the gifted curriculum, which is applied? 3. Are the content areas organized and delivered properly? B. Are the programs appropriate for the gifted? 1. Are the educational goals of the programs aligned with those of gifted education in mathematics? 2. Does the content of each program reflect the characteristics of mathematically gifted students and develop their mathematical talents? 3. Are the teaching and learning models and methods diverse enough to develop their talents? 4. Does the assessment of each program reflect the learning goals and content and enhance gifted students' thinking ability? The conclusions are as follows. First, the contents found best to teach the mathematically gifted were numeration, arithmetic, geometry, measurement, probability, statistics, and letters and expressions, and the enrichment and selection areas of the gifted curriculum were offered in many ways so that giftedness could be fully developed. Second, the educational goals of the programs accorded with the directions and philosophy of mathematics education, and the programs reflected the curriculum's required goals of improving creativity, thinking ability, and problem-solving ability; to accomplish these goals, visualization, symbolization, phasing, and exploration strategies were used effectively. Various lecture types, cooperative learning, and discovery learning were applied to meet the goals of the teaching and learning models, and for the learning activities diverse strategies and models were used to develop the students' talents, including experiments, exploration, application, estimation, conjecture, discussion (conjecture and refutation), and reconsideration. Evaluation and paper exams were not mentioned to the students; while the program activities were performed, the educational goals and assessment methods were reflected through products, performance assessment, and portfolios rather than paper tests alone.

An Empirical Study on Motivation Factors and Reward Structure for User's Creative Contents Generation: Focusing on the Mediating Effect of Commitment (창의적인 UCC 제작에 영향을 미치는 동기 및 보상 체계에 대한 연구: 몰입에 매개 효과를 중심으로)

  • Kim, Jin-Woo;Yang, Seung-Hwa;Lim, Seong-Taek;Lee, In-Seong
    • Asia pacific journal of information systems
    • /
    • v.20 no.1
    • /
    • pp.141-170
    • /
    • 2010
  • User created content (UCC) is created and shared by ordinary users online. From the user's perspective, the increase of UCC has expanded alternative means of communication, while from the business perspective UCC has formed an environment in which an abundant amount of new content can be produced. Despite this outward quantitative growth, however, much UCC does not meet the expectations of general users in terms of quality, as can be observed in pirated and user-copied content. The purpose of this research is to investigate effective methods for fostering the production of creative user-generated content. This study proposes two core elements believed to enhance content creativity, namely reward and motivation, together with a mediating factor, users' commitment, which is expected to bridge increased motivation and content creativity. Based on this perspective, the research examines how to construct the dimensions of reward and motivation in UCC services for creative content production, identified in three phases. First, three dimensions of rewards are proposed: the task dimension, the social dimension, and the organizational dimension. Task-dimension rewards are related to the inherent characteristics of a task, such as writing blog articles and posting photos; four concrete ways of providing task-related rewards in UCC environments are suggested: skill variety, task significance, task identity, and autonomy. Social-dimension rewards are related to the connected relationships among users. The organizational dimension consists of monetary payoff and recognition from others. Second, two types of motivation are suggested to be affected by the diverse reward schemes: intrinsic and extrinsic. Intrinsic motivation occurs when people create new UCC content for its own sake, whereas extrinsic motivation occurs when people create new content for other purposes such as fame or money. Third, commitment is suggested to work as an important mediating variable between motivation and content creativity. We believe commitment is especially important in online environments because it has been found to exert a stronger impact on Internet users than other relevant factors do. Two types of commitment are suggested: emotional commitment and continuance commitment. Finally, content creativity is the dependent variable of the study. We provide a systematic method to measure the creativity of UCC content based on prior studies of creativity measurement, including expert evaluation of blog pages posted by Internet users. To test the theoretical model, 133 active blog users were recruited to participate in a group discussion and a survey. They filled out a questionnaire on their commitment, motivation, and rewards for creating UCC content. At the same time, their personal creativity was measured by independent experts using the Torrance Tests of Creative Thinking, and two independent raters visited the participants' blog pages and evaluated the content creativity using the Creative Products Semantic Scale. All data were compiled and analyzed through structural equation modeling. We first conducted a confirmatory factor analysis to validate the measurement model; the measures satisfied the requirements of reliability, convergent validity, and discriminant validity. Given that the measurement model was valid and reliable, we proceeded to a structural model analysis, which indicated that all variables in the model had sufficient explanatory power in terms of R-square values. The results identified several important reward schemes. First of all, skill variety, task significance, task identity, and autonomy were all found to have significant influences on the intrinsic motivation to create UCC content. The relationship with other users was found to strongly influence both intrinsic and extrinsic motivation, and the opportunity to gain recognition for one's UCC work had a significant impact on extrinsic motivation. However, contrary to our expectation, monetary compensation did not have a significant impact on extrinsic motivation. Commitment proved to be an important mediating factor between motivation and content creativity in the UCC environment: the fully mediated model had higher explanatory power than the no-mediation or partially mediated models. The paper ends with implications of these results. First, from a theoretical perspective, the study proposes and empirically validates commitment as an important mediating factor between motivation and content creativity, reflecting the characteristics of online environments in which UCC creation occurs voluntarily. Second, from a practical perspective, the study proposes several concrete reward factors germane to the UCC environment and estimates their effectiveness for content creativity; in addition to the quantitative results on the relative importance of the reward factors, it proposes concrete ways to provide rewards in the UCC environment based on focus-group-interview (FGI) data collected after the participants finished answering the survey. Finally, from a methodological perspective, the study suggests and implements a way to measure UCC content creativity independently of the content generators' personal creativity, which future research on UCC creativity can use. In sum, this study proposes and validates important reward features and their relations to motivation, commitment, and content creativity in the UCC environment, believed to be one of the most important factors for the success of UCC and Web 2.0, and thus provides significant theoretical and practical bases for fostering creativity in UCC content.
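
As a side note on the reliability checks mentioned above: the internal consistency of multi-item scales such as commitment is commonly screened with Cronbach's alpha. The sketch below is a generic illustration on synthetic Likert data, not the authors' actual analysis (which validated the measures through confirmatory factor analysis):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

# hypothetical 5-point responses from 133 respondents on a 4-item scale
rng = np.random.default_rng(0)
trait = rng.normal(size=(133, 1))                 # shared latent trait
scores = np.clip(np.rint(3 + trait + rng.normal(scale=0.7, size=(133, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```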

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed increases to meet growing data demands, from 2G to 5G. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To deliver these services, reduced latency and high reliability are critical for real-time applications, on top of high data rates; 5G therefore targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/km². In particular, in intelligent traffic control systems and services using vehicle-based V2X (Vehicle-to-Everything), such as traffic control, reduced delay and reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting indoor use; such constraints are difficult to overcome under existing networks. The conventional centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in an emergency during autonomous driving. In such scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since centralized SDNs have difficulty meeting the desired delay level, studies on the optimal size of an SDN for information processing are needed. SDNs therefore need to be split at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized structure, even under worst-case conditions. In these split-SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough to keep it below 1 ms, but the information change cycle and the SDN data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; there, delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the 5G data rate is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. We further assumed 5G small cells with radii of 50-250 m and vehicle speeds of 30-200 km/h in order to examine the network architecture that minimizes the delay.
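
To make the delay constraint concrete: the time a vehicle spends inside one small cell bounds how long a serving SDN segment has to refresh and deliver information before a handover. A minimal back-of-the-envelope sketch using only the cell radii (50-250 m) and vehicle speeds (30-200 km/h) stated above, not the paper's simulator:

```python
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Best-case chord = cell diameter; seconds the vehicle spends in one cell."""
    return 2 * cell_radius_m / (speed_kmh / 3.6)

for r in (50, 150, 250):            # small-cell radii from the simulated range
    for v in (30, 100, 200):        # vehicle speeds from the simulated range
        print(f"r={r:3d} m, v={v:3d} km/h -> dwell {dwell_time_s(r, v):6.1f} s")
```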

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, providing a great opportunity for the public through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems. Nevertheless, there are still few realized business models based on big data analysis. In this situation, this paper aims to develop a new business model that can be applied to the ex-ante prediction of the likelihood of a credit guarantee insurance accident. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of the predictive models, including logistic regression, Random Forest, XGBoost, LightGBM, and a DNN (deep neural network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and the logit model; and Kim and Kim (2001) used artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each case, not only on each model's classification accuracy over the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample; specifically, the LightGBM model shows the highest accuracy of 71.1% and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In the business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. Examining the accuracy for each interval, the logit model has the highest accuracy of 100% for the 0-10% interval of predicted default probability but a relatively low accuracy of 61.5% for the 90-100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and the DNN show more desirable results: higher accuracy for both the 0-10% and 90-100% intervals but lower accuracy around 50% predicted probability. Regarding the distribution of samples across intervals, both LightGBM and XGBoost place a relatively large number of samples in the 0-10% and 90-100% intervals. Although Random Forest has an advantage in classification accuracy, it does so with a small number of cases; LightGBM or XGBoost could be more desirable models because they classify a large number of cases into the two extreme intervals, even allowing for their relatively lower accuracy there. Considering the importance of type 2 error and total prediction accuracy, XGBoost and the DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model nevertheless has a comparative advantage on particular evaluation standards; for instance, Random Forest shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple classification models through majority voting could maximize overall performance.
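
A minimal sketch of the interval-wise evaluation described above: bin each test case by its predicted default probability into ten equal intervals and compute the accuracy and sample count per bin. The variable names and the 0.5 decision threshold are assumptions, not the authors' code:

```python
import numpy as np

def accuracy_by_decile(y_true, p_default, threshold=0.5):
    """Accuracy within [0,10%), [10,20%), ..., [90,100%] of predicted probability."""
    y_true, p = np.asarray(y_true), np.asarray(p_default)
    y_pred = (p >= threshold).astype(int)
    bins = np.minimum((p * 10).astype(int), 9)    # map p = 1.0 into the last bin
    result = {}
    for b in range(10):
        mask = bins == b
        acc = float((y_pred[mask] == y_true[mask]).mean()) if mask.any() else None
        result[f"{10*b}-{10*(b+1)}%"] = (acc, int(mask.sum()))
    return result

# usage with any fitted scikit-learn-style classifier (hypothetical names):
# p = model.predict_proba(X_test)[:, 1]
# for interval, (acc, n) in accuracy_by_decile(y_test, p).items():
#     print(interval, acc, n)
```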

A Study on the Evaluation of Fertilizer Loss in the Drainage(Waste) Water of Hydroponic Cultivation, Korea (수경재배 유출 배액(폐양액)의 비료 손실량 평가 연구)

  • Jinkwan Son;Sungwook Yun;Jinkyung Kwon;Jihoon Shin;Donghyeon Kang;Minjung Park;Ryugap Lim
    • Journal of Wetlands Research
    • /
    • v.25 no.1
    • /
    • pp.35-47
    • /
    • 2023
  • As Korean facility horticulture and hydroponic cultivation expand, management of the drainage (waste nutrient solution) they generate is required. In this study, the amount of fertilizer contained in the discharged waste solution was determined and valued in monetary terms, suggesting both reduced water treatment costs and recycling of fertilizer components. The evaluation was based on water quality analyses of the waste solution for major crops, i.e., tomatoes, paprika, cucumbers, and strawberries; the P component was converted to phosphate (P2O5). The nitrogen (N) discharge was calculated as 1,145.90 kg·ha-1 for tomatoes, 920.43 kg·ha-1 for paprika, 804.16 kg·ha-1 for cucumbers, and 405.83 kg·ha-1 for strawberries, and the P2O5 content as 830.65 kg·ha-1 for paprika, 622.32 kg·ha-1 for tomatoes, and 477.67 kg·ha-1 for cucumbers. Potassium (K), calcium (Ca), magnesium (Mg), iron (Fe), manganese (Mn), and other elements were also found in the discharge. The price per kg of each element, averaged from fertilizers sold on the market, was evaluated in KRW as: N 860.7, P 2,378.2, K 2,121.7, Ca 981.2, Mg 1,036.3, Fe 126,076.9, Mn 62,322.1, Zn 15,825.0, Cu 31,362.0, B 4,238.0, and Mo 149,041.7. The annual fertilizer loss for each crop was then calculated by comprehensively considering the price per kg, the waste concentrations analyzed above, and the average annual discharge volume of hydroponic cultivation. The average for the four hydroponic crops was 5,475,361.1 KRW of fertilizer components per year, with tomatoes at 6,995,622.3 KRW, paprika at 7,384,923.8 KRW, cucumbers at 5,091,607.9 KRW, and strawberries at 2,429,290.6 KRW. If hydroponic drainage were managed through self-treatment or reuse before discharge, rather than released into rivers and treated as a pollutant, it could be a valuable reusable fertilizer resource while also reducing water treatment costs.
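
The valuation logic is essentially nutrient load times unit price. Below is a simplified, illustrative calculation using only the N and P2O5 loads and unit prices quoted above; the paper's full estimate also incorporates annual discharge volumes and the remaining elements, and pricing P as the elemental fraction of P2O5 is an assumption here:

```python
# per-hectare annual loads (kg) and market-averaged unit prices (KRW/kg) from the abstract
PRICE = {"N": 860.7, "P": 2378.2}
LOADS = {
    "tomato":   {"N": 1145.90, "P2O5": 622.32},
    "paprika":  {"N": 920.43,  "P2O5": 830.65},
    "cucumber": {"N": 804.16,  "P2O5": 477.67},
}
P_FRACTION = 62.0 / 142.0        # mass fraction of elemental P in P2O5

for crop, load in LOADS.items():
    value = load["N"] * PRICE["N"] + load["P2O5"] * P_FRACTION * PRICE["P"]
    print(f"{crop}: ~{value:,.0f} KRW/ha per year from N and P alone")
```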

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are context-relevant (such as place and time). Information technologies continue to mature, providing greatly improved performance: sensor networks and intelligent software can now obtain context data, the cornerstone of personalized, context-specific services. Yet the danger of overflowing personal information is increasing, because the data retrieved by sensors usually contain private information, and various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also grown, for example over the unrestricted availability of context information, and these concerns are consistently regarded as a critical issue for the success of context-aware personalized services. The field of information privacy is growing as a research area, with many new definitions and terminologies, because a better understanding of information privacy concepts is needed; in particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous work on information privacy factors for context-aware applications has at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing: existing studies have focused on only a small subset of them, so no mutually exclusive set of factors uniquely and completely describes information privacy in context-aware applications. Second, most studies have used user surveys to identify information privacy factors, despite the limits of users' knowledge and experience of context-aware technology. Because context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with them, and it is difficult to build users' knowledge even through scenarios, pictures, flash animations, and the like; a survey that assumes participants sufficiently understand the technologies shown may not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., considering only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. The purpose of this paper is to identify a generic set of factors for information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors, and we deployed a Delphi survey, a rigorous data collection method, to obtain reliable expert opinion and produce the rank-ordered list; it lends itself well to obtaining a universal set of information privacy concern factors and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems took part in the research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors; throughout, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined and administered the process in three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to assist them, some main factors found in the literature were presented. The second round discussed the main factors from the first round, fleshed out with relevant sub-factors drawn from the literature survey; respondents evaluated each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates, and the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and respondents assessed the importance of each main factor and its sub-factors; we then calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence, adopting a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds. As a result, the experts reported that context data collection and a highly identifiable level of identity data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded, disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies based on privacy issues that may occur during the lifecycle of acquired user information; our study clarifies these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the characteristics with higher potential to increase users' privacy concerns. Second, this study considered privacy issues in service delivery and display, almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor it correlated the level of importance with professionals' opinions on the extent of users' privacy concerns. The traditional questionnaire method was not selected because users were considered to lack understanding and experience of context-aware personalized services as a new technology. Regarding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and sensor networks as the most important technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, alongside the development of context-aware technology; however, the results of this study show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. To build on the sub-factor evaluation, additional studies are needed on reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identity data, the diverse types of context data collected, tracking and recording functionality, and embedded, disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output: delivery and display showing services to users, toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help increase service success rates and user acceptance. Our future work is to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
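
The concordance analysis mentioned above is typically computed as Kendall's coefficient of concordance W over the experts' rankings; a rising W across rounds indicates a converging panel. A generic sketch with a hypothetical panel (the formula omits tie correction):

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's W for an (m_experts, n_items) matrix of ranks;
    0 = no agreement, 1 = complete agreement."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()   # spread of rank sums
    return 12 * s / (m**2 * (n**3 - n))

# hypothetical: 8 experts ranking 5 privacy-concern factors (1 = most important)
panel = np.array([
    [1, 2, 3, 4, 5], [1, 3, 2, 4, 5], [2, 1, 3, 5, 4], [1, 2, 4, 3, 5],
    [1, 2, 3, 5, 4], [2, 1, 3, 4, 5], [1, 3, 2, 4, 5], [1, 2, 3, 4, 5],
])
print(f"W = {kendalls_w(panel):.2f}")
```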

Evaluation of Proper Image Acquisition Time by Change of Infusion dose in PET/CT (PET/CT 검사에서 주입선량의 변화에 따른 적정한 영상획득시간의 평가)

  • Kim, Chang Hyeon;Lee, Hyun Kuk;Song, Chi Ok;Lee, Gi Heun
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.2
    • /
    • pp.22-27
    • /
    • 2014
  • Purpose Recent PET/CT scans tend to use lower doses to reduce patient exposure, aided by the development of equipment. After installing a GE Discovery 690 PET/CT scanner (GE Healthcare, Milwaukee, USA) at this hospital in 2011, we reduced the injected $^{18}F$-FDG dose to lower patient exposure. Accordingly, we evaluated the appropriate acquisition time per bed position for each injected dose so as to maintain the image quality of the PET/CT scanner. Materials and Methods Air, Teflon, and hot cylinder inserts were placed in a NEMA NU2-1994 phantom, the radioactivity concentration was maintained at a 4:1 ratio of hot cylinder to background activity, and the hot cylinder concentration was increased through 3, 4.3, 5.5, and 6.7 MBq/kg. Images were acquired with the acquisition time per bed increased through 30 seconds, 1 minute, 1 minute 30 seconds, 2 minutes, 2 minutes 30 seconds, 3 minutes, 3 minutes 30 seconds, 4 minutes, 4 minutes 30 seconds, 5 minutes, 5 minutes 30 seconds, 10 minutes, 20 minutes, and 30 minutes, and ROIs were set on the hot cylinder and the background region. We measured the maximum standardized uptake value ($SUV_{max}$), computed the signal-to-noise ratio (SNR) and the standard deviation of the background (BKG), and compared their changes against hot cylinder concentration and acquisition time per bed. We also compared the standard deviations of $SUV_{max}$, SNR, and BKG for uptake waiting times of 15 minutes and 1 hour using the 4.3 MBq phantom. Results As the radioactivity concentration per unit mass was increased through 3, 4.3, 5.5, and 6.7 MBq/kg and the time per bed was increased, the $SUV_{max}$ of the hot cylinder varied markedly with concentration, from a maximum of 18.3 down to a minimum of 7.3, for acquisition times of 30 seconds to 2 minutes, whereas it remained stable between 5.6 and 8 from 2 minutes 30 seconds to 30 minutes. The SNR stayed between 0.41 and 0.49 at 3 MBq/kg; at 4.3 and 5.5 MBq/kg it rose with acquisition time from minima of 0.23 and 0.39 to maxima of 0.59 and 0.54, respectively; and at 6.7 MBq/kg it reached 0.59 at 30 seconds but otherwise held between 0.43 and 0.53. The standard deviation of the background (BKG) was low, from 0.38 down to 0.06, at 3 MBq/kg from 2 minutes 30 seconds onward; from 0.38 down to 0 at 4.3 and 5.5 MBq/kg from 1 minute 30 seconds onward; and from 0.33 down to 0.05 at 6.7 MBq/kg over the whole range from 30 seconds to 30 minutes. When the uptake waiting time was changed between 15 minutes and 1 hour with the 4.3 MBq phantom, $SUV_{max}$ showed stable values from 2 minutes 30 seconds of acquisition time per bed, and the SNR showed similar values from 1 minute 30 seconds. Conclusion As shown above, when the radioactivity concentration per unit mass was increased through 3, 4.3, 5.5, and 6.7 MBq/kg, the values of $SUV_{max}$ and SNR remained stable for acquisition times of 2 minutes 30 seconds or more per bed, and the same held for the two uptake waiting times (15 minutes and 1 hour). From this NEMA NU2-1994 phantom experiment, we found that the minimum acquisition time per bed for stable $SUV_{max}$ and SNR values was 2 minutes 30 seconds, even as the injected radioactivity concentration changed. However, this acquisition time may differ according to the features and performance of the equipment.
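
For reference, $SUV_{max}$ and SNR can be computed from ROI statistics roughly as follows. The SUV definition is the standard one; the SNR definition (contrast over background noise) is an assumption, since the abstract does not spell out the formula used:

```python
import numpy as np

def suv(conc_kbq_per_ml, injected_mbq, weight_kg):
    """SUV = tissue concentration / (injected activity per body mass).
    MBq/kg equals kBq/g numerically, and 1 mL of tissue ~ 1 g, so it is dimensionless."""
    return conc_kbq_per_ml / (injected_mbq / weight_kg)

def suv_max_and_snr(hot_roi, bkg_roi, injected_mbq, weight_kg):
    hot, bkg = np.asarray(hot_roi, float), np.asarray(bkg_roi, float)
    suv_max = suv(hot.max(), injected_mbq, weight_kg)   # hottest voxel in the ROI
    snr = (hot.mean() - bkg.mean()) / bkg.std(ddof=1)   # contrast over noise
    return suv_max, snr

# hypothetical voxel values (kBq/mL), 300 MBq injected into a 70 kg subject
rng = np.random.default_rng(1)
hot = rng.normal(20.0, 1.5, size=500)
bkg = rng.normal(5.0, 0.8, size=500)
print(suv_max_and_snr(hot, bkg, injected_mbq=300, weight_kg=70))
```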
