• Title/Summary/Keyword: Flash Set

Search Results: 71

The Accuracy of the Table Movement During a Whole Body Scan (전신 영상 검사 시행 시 테이블 이동속도의 정확성에 관한 연구)

  • Lee, Ju-Young;Jung, Woo-Young;Jung, Eun-Mi;Dong, Kyung-Rae
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.3 / pp.86-91 / 2009
  • Purpose: The whole body scan is a widely accepted examination in Nuclear Medicine. It is used mainly in bone, I-131, MIBI, and HMPAO WBC scans, with table speeds ranging from 13 cm/min for the HMPAO WBC scan to 30 cm/min for a whole body bone scan using the Onco.Flash technique. The accuracy of the table movement correlates strongly with image quality, and an inaccurate speed can degrade it. The purpose of this study is to evaluate the accuracy of the table movement while considering the age of the equipment and the variability in patient weight. Materials and Methods: The study was conducted on two SIEMENS gamma cameras at Seoul Asan Medical Center that are commonly used for whole body studies: the oldest camera, an ECAM plus (installed in 2000), and the newest, a SYMBIA T2 (installed in 2008). Three trials were conducted with the table set to a different speed each time: 10, 15, and 30 cm/min. The table speed was measured by timing how long the table took to move 10 cm, repeated every 10 cm until the table reached 100 cm. Since the average patient body weight is about 60-70 kg, the table speed was also measured with loads of 0 kg, 66 kg, and 110 kg placed on the table and compared among conditions. Results: The coefficient of variation (CV) of the ECAM plus was 1.23, 1.42, and 2.02 when the table speed was set to 10, 15, and 30 cm/min, respectively. Under the same conditions, the SYMBIA T2 showed 1.23, 1.83, and 2.28. The CV increased as the table speed increased. When the load was set to 0, 66, and 110 kg, the CV values of both cameras were 0.96, 1.45, 2.08 (0 kg), 1.32, 1.72, 2.27 (66 kg), and 1.37, 1.73, 2.14 (110 kg). There was no significant difference at the 95% confidence level (p>0.05), and the measured CV values were acceptable, although the CV of the SYMBIA T2 was somewhat larger than that of the ECAM plus. Conclusion: The scan speed of a whole body scan is predetermined by the examination being performed. The accuracy of that speed can be affected by factors such as the age of the equipment, the state of the bearings, or the weight of the patient, and these factors can degrade diagnostic consistency and image quality. Therefore, periodic quality control of the gamma cameras currently in use should include the table movement speed in order to maintain accuracy and reproducibility. (A minimal sketch of the CV calculation appears after this entry.)

  • PDF
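As a small illustration of the coefficient of variation (CV) used as the figure of merit in this abstract, the following Python sketch computes the CV of table-speed measurements taken over successive 10 cm segments. The segment times and the function name are illustrative assumptions, not data or code from the study.

```python
import statistics

def coefficient_of_variation(values):
    """CV in percent: standard deviation divided by the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return 100.0 * stdev / mean

# Hypothetical example: time (s) for the table to travel each successive
# 10 cm segment at a nominal 30 cm/min, converted to speed (cm/min).
segment_times_s = [19.8, 20.1, 20.4, 19.9, 20.2, 20.0, 20.3, 19.7, 20.5, 20.1]
speeds_cm_per_min = [10.0 / t * 60.0 for t in segment_times_s]

print(f"mean speed = {statistics.mean(speeds_cm_per_min):.2f} cm/min")
print(f"CV = {coefficient_of_variation(speeds_cm_per_min):.2f} %")
```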

Comparison of Collimator Choice on Image Quality of I-131 in SPECT/CT (I-131 SPECT/CT 검사의 에서 조준기 종류에 따른 영상 비교 평가)

  • Kim, Jung Yul;Kim, Joo Yeon;Nam-Koong, Hyuk;Kang, Chun Goo;Kim, Jae Sam
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.1 / pp.33-42 / 2014
  • Purpose: I-131 scans are generally performed with a High Energy (HE) collimator. Although a Medium Energy (ME) collimator is not normally recommended because of excessive septal penetration, it can be used to improve count-rate sensitivity at lower doses of I-131. This research aims to evaluate I-131 SPECT/CT image quality with the HE and ME collimators and to examine whether the ME collimator can be applied clinically. Materials and Methods: The ME and HE collimators were mounted in turn on a Siemens Symbia T16 SPECT/CT, using an I-131 point source and a NEMA NU-2 IQ phantom. Images were acquired with a Single Energy Window (SEW) and with Triple Energy Windows (TEW), with and without CTAC and scatter correction, and were reconstructed with the iterative reconstruction method Flash 3D using different numbers of iterations and subsets. The acquired images were analyzed to compare the sensitivity, contrast, noise, and aspect ratio of the two collimators. Results: The ME collimator outperformed the HE collimator in sensitivity (ME collimator: 188.18 cps/MBq, HE collimator: 46.31 cps/MBq). For contrast, the image reconstructed from the HE collimator with TEW, 16 subsets, 8 iterations, and CTAC showed the highest contrast (TCQI=190.64); under the same conditions, the ME collimator showed lower contrast (TCQI=66.05). The lowest aspect ratios for the ME and HE collimators were 1.065 with SEW, CTAC (+) and 1.024 with TEW, CTAC (+), respectively. Conclusion: Selecting a proper collimator is an important factor for image quality. The findings show that the HE collimator, which is generally used for I-131 scans because of the high-energy γ-rays emitted, remains the most advisable collimator for image quality. However, the ME collimator is also applicable under low-dose, low-count conditions if the energy window, matrix size, iterative reconstruction parameters, CTAC, and scatter correction are chosen appropriately. (A minimal sketch of the sensitivity calculation appears after this entry.)

  • PDF
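To illustrate the sensitivity figure reported above (counts per second per unit of source activity), here is a minimal Python sketch of a cps/MBq calculation. The counts, acquisition time, and activity are hypothetical values chosen only to reproduce the reported 188.18 and 46.31 cps/MBq figures; they are not the study's measurements.

```python
def sensitivity_cps_per_mbq(total_counts, acquisition_time_s, activity_mbq):
    """System sensitivity: measured count rate divided by source activity."""
    count_rate_cps = total_counts / acquisition_time_s
    return count_rate_cps / activity_mbq

# Hypothetical point-source acquisitions (values are illustrative only).
for name, counts in [("ME collimator", 1_129_080), ("HE collimator", 277_860)]:
    s = sensitivity_cps_per_mbq(total_counts=counts,
                                acquisition_time_s=300,
                                activity_mbq=20.0)
    print(f"{name}: {s:.2f} cps/MBq")
```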

Outlier Detection from High Sensitive Geiger Mode Imaging LIDAR Data retaining a High Outlier Ratio (높은 이상점 비율을 갖는 고감도 가이거모드 영상 라이다 데이터로부터 이상점 검출)

  • Kim, Seongjoon;Lee, Impyeong;Lee, Youngcheol;Jo, Minsik
    • Korean Journal of Remote Sensing / v.28 no.5 / pp.573-586 / 2012
  • Point clouds acquired by a LIDAR (Light Detection And Ranging, also LADAR) system often contain erroneous points, called outliers, that do not appear to lie on physical surfaces; these should be carefully detected and eliminated before further processing for applications. Particularly for LIDAR systems employing a high-sensitivity Geiger-mode focal plane array detector (GmFPA), the outlier ratio is very high, so existing algorithms often fail to detect the outliers in such data sets. In this paper, we propose a method to discriminate outliers in a point cloud with a high outlier ratio acquired by a GmFPA LIDAR system. The underlying assumption of the method is that a meaningful target surface occupies at least two adjacent pixels and that the ranges measured at these pixels are similar. We applied the proposed method to simulated LIDAR data of different point densities and outlier ratios and analyzed the performance for different thresholds and data properties. We found that the outlier detection probability is about 99% in most cases. We also confirmed that the proposed method is robust to the data properties and not very sensitive to the thresholds. The method can be utilized effectively for on-line real-time processing and post-processing of GmFPA LIDAR data.
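The abstract's key assumption is that a real target surface occupies at least two adjacent pixels with similar ranges, while an isolated return is likely an outlier. The following Python sketch applies that idea as a simple neighbour-consistency filter on a range image; the array layout, tolerance value, and function name are assumptions for illustration and not the authors' implementation.

```python
import numpy as np

def detect_outliers(range_image, range_tolerance=0.5):
    """Flag pixels that have no 8-neighbour with a similar range.

    range_image: 2-D array of measured ranges; np.nan marks empty pixels.
    Returns a boolean mask, True where a pixel is judged an outlier.
    """
    rows, cols = range_image.shape
    outlier = np.zeros_like(range_image, dtype=bool)
    for r in range(rows):
        for c in range(cols):
            value = range_image[r, c]
            if np.isnan(value):
                continue
            supported = False
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        neighbour = range_image[rr, cc]
                        if not np.isnan(neighbour) and abs(neighbour - value) <= range_tolerance:
                            supported = True
            outlier[r, c] = not supported
    return outlier

# Tiny synthetic example: a 2-pixel surface at ~10 m and one isolated return.
img = np.full((4, 4), np.nan)
img[1, 1], img[1, 2] = 10.0, 10.2   # adjacent, similar ranges -> kept
img[3, 3] = 42.0                    # isolated return -> flagged as outlier
print(detect_outliers(img))
```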

Content based Video Segmentation Algorithm using Comparison of Pattern Similarity (장면의 유사도 패턴 비교를 이용한 내용기반 동영상 분할 알고리즘)

  • Won, In-Su;Cho, Ju-Hee;Na, Sang-Il;Jin, Ju-Kyong;Jeong, Jae-Hyup;Jeong, Dong-Seok
    • Journal of Korea Multimedia Society / v.14 no.10 / pp.1252-1261 / 2011
  • In this paper, we propose a pattern-similarity comparison method for a video segmentation algorithm. Shot boundaries are categorized into two types, abrupt change and gradual change; representative examples of gradual change are dissolve, fade-in, fade-out, and wipe transitions. The proposed method treats shot boundary detection as a two-class problem: whether a shot boundary event occurs or not. Defining the similarity between frames is essential for shot boundary detection, so we propose two similarity measures, within-similarity and between-similarity. The within-similarity is defined by comparing features of frames belonging to the same shot, and the between-similarity by comparing features of frames belonging to different shots. Finally, we compare the statistical patterns of the within-similarity and the between-similarity. Because this measure is robust to flashlight and object movement, the proposed algorithm helps to reduce the false-positive rate. We employ the color histogram and the mean of sub-blocks of the frame image as frame features. We evaluated the method on a video dataset including the TREC-2001 and TREC-2002 sets, where the proposed algorithm achieved 91.84% recall and 86.43% precision.
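As a rough illustration of the frame-similarity idea underlying the within-similarity and between-similarity measures, the sketch below compares colour histograms of frames with a histogram-intersection score. It is only a minimal example of histogram-based frame comparison; the authors' actual features (including sub-block means) and their statistical pattern comparison are not reproduced here.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Per-channel histogram of an RGB frame (H x W x 3, uint8), L1-normalized."""
    hists = [np.histogram(frame[..., ch], bins=bins, range=(0, 256))[0]
             for ch in range(3)]
    hist = np.concatenate(hists).astype(float)
    return hist / hist.sum()

def histogram_similarity(frame_a, frame_b):
    """Histogram-intersection similarity in [0, 1]; 1 means identical histograms."""
    return float(np.minimum(color_histogram(frame_a), color_histogram(frame_b)).sum())

# Within-similarity: frames assumed to come from the same shot.
# Between-similarity: frames assumed to come from different shots.
rng = np.random.default_rng(0)
shot_a = rng.integers(0, 64, size=(120, 160, 3), dtype=np.uint8)     # dark-ish shot
shot_a_next = np.clip(shot_a.astype(int) + 5, 0, 255).astype(np.uint8)
shot_b = rng.integers(128, 256, size=(120, 160, 3), dtype=np.uint8)  # bright shot

print("within-similarity :", histogram_similarity(shot_a, shot_a_next))
print("between-similarity:", histogram_similarity(shot_a, shot_b))
```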

The Evaluation of Clinical Usefulness on Application of Half-Time Acquisition Factor in Gated Cardiac Blood Pool Scan (게이트심장혈액풀 스캔에서 Half-Time 획득 인자 적용에 따른 임상적 유용성 평가)

  • Lee, Dong-Hun;Yoo, Hee-Jae;Lee, Jong-Hun;Jung, Woo-Young
    • The Korean Journal of Nuclear Medicine Technology / v.12 no.3 / pp.192-198 / 2008
  • Purpose: Reducing scan time helps to yield more accurate results and to minimize patient motion, and it can also be expected to increase patient satisfaction with the examination. Medical equipment companies have therefore developed various programs to reduce scan time. We applied Onco.Flash (Pixon method, SIEMENS), an image processing technique, to the gated cardiac blood pool scan and evaluated its clinical usefulness. Materials and Methods: We analyzed 50 patients who underwent a gated blood pool scan in the nuclear medicine department of Asan Medical Center from June 20, 2008 to August 14, 2008. We acquired Full-time (6,000 kcounts) and Half-time (3,000 kcounts) LAO images in the same position. LVEF values were measured ten times from the Full-time images and from the Half-time images processed with the technique, and their means and standard deviations were analyzed. To estimate LVEF under the same conditions, the LV ROI and background ROI were placed automatically on the same X and Y axes. We also performed a blinded review by a physician. Results: Quantitative analysis of the 50 patients' EF values gave a mean ± standard deviation of 68.12 ± 7.84% for the Full-time images and 68.49 ± 8.73% for the Half-time images processed with the technique. At the 95% confidence level, there was no statistically significant difference (p>0.05). In the qualitative blinded review by a physician, there was no difference between the Full-time images and the processed Half-time images in observing LV myocardial wall motion. Conclusion: The gated cardiac blood pool scan has been reported to measure EF more accurately than ultrasound or CT, but it takes longer than other examinations, so its time competitiveness needs to improve. If the Half-time technique is adopted for gated cardiac blood pool scintigraphy on the basis of this study, we expect to reduce possible artifacts and to improve the accessibility and flexibility of the examination, as well as patient satisfaction. (A minimal sketch of the statistical comparison appears after this entry.)

  • PDF
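The quantitative comparison reported above (paired Full-time and Half-time LVEF values with no significant difference at the 95% level) is the kind of check a paired t-test performs. The sketch below shows such a test on made-up LVEF values; it is not the study's data, and the abstract does not state which statistical test was used.

```python
from scipy import stats

# Hypothetical paired LVEF measurements (%) for the same patients,
# one value from the Full-time image and one from the Half-time image.
full_time_ef = [62.1, 70.4, 68.9, 55.3, 74.0, 66.2, 71.8, 60.5]
half_time_ef = [61.8, 70.9, 68.2, 56.0, 74.6, 65.7, 72.3, 60.1]

t_stat, p_value = stats.ttest_rel(full_time_ef, half_time_ef)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")
print("no significant difference at the 95% level" if p_value > 0.05
      else "significant difference at the 95% level")
```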

Optimization and characterization of biodiesel produced from vegetable oil

  • Mustapha, Amina T.;Abdulkareem, Saka A.;Jimoh, Abdulfatai;Agbajelola, David O.;Okafor, Joseph O.
    • Advances in Energy Research / v.1 no.2 / pp.147-163 / 2013
  • The world faces energy crises and environmental deterioration due to over-dependence on a single energy source, fossil fuel. Although fuel is needed as an ingredient for the industrial development and growth of any country, fossil fuel, the major source of energy for this purpose, has always been a concern; hence the need for alternative and renewable energy sources. The search for alternatives has led to the acceptance of biofuel as a reliable alternative energy source. This work presents an optimization study of the transesterification of vegetable oil to biodiesel using NaOH as catalyst. A 2^4 factorial design was employed to investigate the influence of the oil-to-methanol ratio, temperature, NaOH concentration, and transesterification time on the biodiesel yield. The low and high levels of the key factors were a 4:1 and 6:1 mole ratio, temperatures of 30 and 60°C, catalyst concentrations of 0.5 and 1.0 wt%, and reaction times of 30 and 60 min. The results revealed that an oil-to-methanol molar ratio of 6:1, a transesterification temperature of 60°C, a catalyst concentration of 1.0 wt%, and a reaction time of 30 min are the best operating conditions, giving an optimum yield of 95.8%. Characterization of the produced biodiesel indicates that the specific gravity, cloud point, flash point, sulphur content, viscosity, diesel index, cetane number, acid value, free glycerine, total glycerine, and total recovery are 0.8899, 4, 13, 0.0087%, 4.83, 25, 54.6, 0.228 mg KOH/g, 0.018, 0.23%, and 96%, respectively, and that the measured properties conform to the set standards. A model equation was developed from the results using a statistical tool. Analysis of variance (ANOVA) shows that the groundnut-oil-to-methanol mole ratio and the transesterification time have the most pronounced effects on the biodiesel yield, with contributions of 55.06% and 9.22%, respectively. It can be inferred from the various results that locally produced groundnut oil can be utilized as a feedstock for biodiesel production.
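The 2^4 factorial design mentioned above enumerates every combination of the low and high levels of the four factors, giving 16 experimental runs. The sketch below generates that design matrix from the levels listed in the abstract; it does not include the study's yield data, model equation, or ANOVA.

```python
from itertools import product

# Low and high levels of the four factors, as listed in the abstract.
factors = {
    "oil_to_methanol_ratio": ("4:1", "6:1"),
    "temperature_C":         (30, 60),
    "catalyst_wt_percent":   (0.5, 1.0),
    "reaction_time_min":     (30, 60),
}

# Full 2^4 factorial design: every combination of low/high levels (16 runs).
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, run in enumerate(runs, start=1):
    print(f"run {i:2d}: {run}")
print(f"total runs: {len(runs)}")   # 2**4 = 16
```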

Study on Characteristics of Change of Physical/Chemical Property in Domestic Aviation Fuel by the Quality Monitoring Analysis (국내 항공유(Jet A-1) 품질모니터링을 통한 물성 변화 특성 연구)

  • Doe, Jin-woo;Youn, Ju-min;Jeon, Hwa-yeon;Yim, Eui-soon;Lee, Joung-min;Kang, Hyung-kyu
    • Journal of the Korean Applied Science and Technology / v.35 no.4 / pp.1327-1337 / 2018
  • Aviation fuel is controlled more strictly than other transport fuels because a quality problem can lead to a major accident. The quality standards for aviation fuel are specified by the domestic Korean Standard (KS), the American Society for Testing and Materials (ASTM), and the International Air Transport Association (IATA). From 2016 to 2017, quality analyses of six items, including aromatic content, sulfur content, and distillation characteristics, were carried out on the jet fuel produced at five domestic refineries. Domestically produced jet fuel was shown to conform to the quality standards and to be maintained at a constant level throughout the year. Compared with the ASTM and IATA specifications, the domestic KS specification for aromatic content is set 1.5 wt% more strictly, but the fuel satisfies this specification with a sufficient margin. In addition, the other items, such as sulfur content, distillation properties, and flash point, satisfied both the domestic and international specifications.
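As a loose illustration of the conformity check described here (comparing measured fuel properties against specification limits), the sketch below flags whether each measured value passes its limit. Both the limit values and the measured values are hypothetical placeholders, not the actual KS, ASTM, or IATA figures or the study's measurements.

```python
# Hypothetical specification limits (placeholders, not actual KS/ASTM/IATA values).
spec_limits = {
    "aromatics_vol_percent": ("max", 25.0),
    "sulfur_wt_percent":     ("max", 0.30),
    "flash_point_C":         ("min", 38.0),
}

# Hypothetical measured values for one fuel sample.
measured = {
    "aromatics_vol_percent": 17.2,
    "sulfur_wt_percent": 0.02,
    "flash_point_C": 42.5,
}

for prop, (kind, limit) in spec_limits.items():
    value = measured[prop]
    ok = value <= limit if kind == "max" else value >= limit
    print(f"{prop}: {value} ({kind} {limit}) -> {'pass' if ok else 'FAIL'}")
```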

Development of a surrogate model based on temperature for estimation of evapotranspiration and its use for drought index applicability assessment (증발산 산정을 위한 온도기반의 대체모형 개발 및 가뭄지수 적용성 평가)

  • Kim, Ho-Jun;Kim, Kyoungwook;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association / v.54 no.11 / pp.969-983 / 2021
  • Evapotranspiration, one of the hydrometeorological components, is an important variable for water resource planning and management and is primarily used as input data for hydrological models such as water balance models. The FAO56 PM method has been recommended as the standard approach for estimating reference evapotranspiration with relatively high accuracy. However, the FAO56 PM method is often difficult to apply because it requires many hydrometeorological variables. For this reason, the temperature-based Hargreaves equation has been widely adopted to estimate reference evapotranspiration. In this study, the parameters of the Hargreaves equation were calibrated with relatively long-term data within a Bayesian framework, and statistical indices (CC, RMSE, IoA) were used to validate the model. The RMSE of the monthly results was 7.94~24.91 mm/month for the validation period, confirming that the accuracy was significantly improved compared to the existing Hargreaves equation. Further, an evaporative demand drought index (EDDI) based on the evaporative demand (E0) was proposed. To confirm its effectiveness, this study evaluated the estimated EDDI for the recent drought events of 2014-2015 and 2018, along with precipitation and SPI. In the evaluation of the Han River watershed in 2018, the weekly EDDI increased to more than 2, confirming that EDDI detects the onset of drought caused by heatwaves more effectively. EDDI can therefore be used as a drought index, particularly for monitoring heatwave-driven flash droughts, alongside SPI.
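For reference, the widely cited textbook form of the Hargreaves equation estimates reference evapotranspiration from air temperature and extraterrestrial radiation alone. The sketch below implements that standard form with its conventional default coefficients; the calibrated parameter set developed in the paper is not reproduced here, and the example inputs are illustrative only.

```python
def hargreaves_et0(t_max_c, t_min_c, ra_mm_per_day,
                   c=0.0023, exponent=0.5, offset=17.8):
    """Reference evapotranspiration (mm/day) by the Hargreaves equation.

    ET0 = c * Ra * (Tmean + offset) * (Tmax - Tmin) ** exponent,
    where Ra is extraterrestrial radiation expressed as equivalent
    evaporation in mm/day. c, exponent, and offset are the kind of
    parameters the paper recalibrates within a Bayesian framework.
    """
    t_mean = (t_max_c + t_min_c) / 2.0
    return c * ra_mm_per_day * (t_mean + offset) * (t_max_c - t_min_c) ** exponent

# Hypothetical mid-summer day (values are illustrative only).
print(f"ET0 = {hargreaves_et0(t_max_c=31.0, t_min_c=22.0, ra_mm_per_day=16.5):.2f} mm/day")
```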

Critical Analyses of '2nd Science Inquiry Experiment Contest' (과학탐구 실험대회의 문제점 분석)

  • Paik, Seoung-Hey
    • Journal of The Korean Association For Science Education / v.15 no.2 / pp.173-184 / 1995
  • The purpose of this study was to analyse the problems of the 'Science Inquiry Experiment Contest (SIEC)', one of the eight programs of 'The 2nd Student Science Inquiry Olympic Meet (SSIOM)'. The results and conclusions of this study were as follows: 1. The role of practical work within science experiments needs to be reconsidered, because practical skills form one of the mainstays of current science, yet the assessment of students' laboratory skills in the contest was given little weight. It is worth recalling what it means to be 'good at science'. There are two aspects, knowing and doing; both are important and, in certain respects, quite distinct. Doing science is more of a craft activity, relying more on craft skill and tacit knowledge than on the conscious application of explicit knowledge. Doing science is also divided into two aspects, 'process' and 'skill', by many science educators. 2. The assessment items of the report and of the checklist overlapped. It was therefore suggested that the checklist items be limited to students' actions that cannot be found in the reports. It is important to identify those activities which produce a permanent assessable product and those which do not. Skills connected with recording and reporting are likely to produce permanent evidence which can be evaluated after the experiment; those connected with manipulative skills involving processes are more ephemeral and need to be assessed as they occur. Dividing students' experimental skills in this way will contribute to an accurate assessment of their scientific inquiry experimental ability. 3. There were wide differences among the scores that the three evaluators recorded for a single participant. This suggests that there was no concrete discussion among the evaluators before the contest. Although the checklist items were set by the preparers of the contest experiments, concrete discussion before the contest was necessary because students' experimental actions were very diverse. There is a variety of scientific skills, so it is necessary to assess the performance of individual students across a range of skills, but most of the difficulties in the assessment of skills arise from the interaction between measurement and use. To overcome these difficulties, not only must the mark for each skill be recorded, something which all examination groups obviously need, but a description of the work that the student did when the skill was assessed must also be given, though not all groups need this. Fuller details must also be available for the purposes of moderation. For all students there must be provision for samples of any end product or other tangible evidence of the candidates' work to be submitted for inspection. This is rather important if one is to be as fair as possible to students, because not only can this work be made available to moderators if necessary, but it can also be used to help in arriving at common standards among several evaluators and in ensuring consistent standards from one evaluator over the assessment period. This need arises because there are problems associated with assessing different students on the same skill in different activities. 4. Most of the students' reports were assessed intuitively by the evaluators even though the assessment items had been established concretely by the preparers of the experiment.
This result means that the evaluators failed to grasp the essence of the established assessment items for the experiment report and that the students' scores lacked objectivity. Lastly, some suggestions follow from the results and conclusions. Students' experimental actions that are difficult to observe because they occur in a flash, and that can easily be imitated, should be excluded from the assessment items; evaluators are likely to miss the moment to observe such actions, and students who are assessed later have more opportunity to practise the skill being assessed. It is necessary to be aware of these problems and try to reduce their influence or remove them. The skills and processes analysis has produced a very useful checklist for assessing scientific inquiry experiments, but in itself it is of little value. It must be seen alongside the other vital attributes needed in the making of a good scientist: the affective aspects of commitment and confidence, the personal insights which come through both formal and informal learning, and the tacit knowledge that comes through experience, both structured and acquired in play. These four aspects must continually interact, in a flexible and individualistic way, throughout the scientific education of students. An increasing ability to be good at science, to be good at doing investigational practical work, will be gained by continually, successively, but often unpredictably, developing more experience, more insights, and more skills, and by producing more confidence and commitment.

  • PDF

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems / v.20 no.2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone of providing personalized, context-specific services. Yet the danger of personal information being exposed is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concern over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing; existing studies have focused on only a small subset of its technical characteristics, so there has been no mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify information privacy factors in most studies, despite the limits of users' knowledge of and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Nevertheless, conducting a survey on the assumption that the participants have sufficient experience of or insight into the technologies shown in the survey may not be valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from the experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for Delphi studies proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For this round only, the experts were treated as individuals, not as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to assist them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor in order to determine the final sub-factors from the candidates; the candidate sub-factors were drawn from the literature survey, and the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual opinions; to do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts rated context data collection and a highly identifiable level of identical data as the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ as follows from those proposed in other studies. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with the greater potential to increase users' privacy concerns. Second, by introducing IPOS as the factor division, this study considered privacy issues in terms of service delivery and display, which were largely overlooked in existing studies. Lastly, for each factor it correlated the level of importance with the professionals' opinions on the extent to which users have privacy concerns.
A traditional questionnaire method was not selected because users of context-aware personalized services were considered to have an absolute lack of understanding of and experience with the new technology. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and the sensory network as the most important factors among the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology, which technologies are needed and in what sequence, and what types of users' context information to acquire. Most studies focus on which services and systems should be provided and developed by utilizing context information, presupposing the development of context-aware technology; however, the results of this study show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. Considering the sub-factor evaluation results, additional studies are needed on approaches to reducing users' privacy concerns regarding technological characteristics such as a highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display presenting services to users in context-aware personalized services, oriented toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
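The concordance analysis mentioned above measures how consistently the experts rank the same items across rounds. The sketch below computes Kendall's coefficient of concordance W, one common choice for such an analysis; the abstract does not state which statistic was actually used, and the ranking matrix here is a made-up example.

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for a (raters x items) rank matrix.

    ranks[r, i] is the rank that rater r assigns to item i (1 = most important).
    W ranges from 0 (no agreement) to 1 (complete agreement). Ties are ignored
    in this simplified sketch.
    """
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape                      # m raters, n items
    rank_sums = ranks.sum(axis=0)           # R_i for each item
    mean_rank_sum = m * (n + 1) / 2.0
    s = ((rank_sums - mean_rank_sum) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Made-up example: 4 experts ranking 5 privacy-concern factors.
expert_ranks = [
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 2, 4, 3, 5],
]
print(f"Kendall's W = {kendalls_w(expert_ranks):.3f}")
```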