• Title/Summary/Keyword: 3G Networks

Application of Digital Content Technology for Veterans Diplomacy (디지털 콘텐츠 기술을 활용한 보훈외교의 발전 방향)

  • So, Byungsoo;Park, Hyungi
    • Journal of Public Diplomacy / v.3 no.2 / pp.35-52 / 2023
  • Korea has developed into an influential country in Asia and around the world on the basis of remarkable economic growth. This development was possible because of those who sacrificed their precious lives and contributed to the nation's survival in past crises. Every year, Korea holds commemorative events with persons of national merit, Korean War veterans, and their families, expressing gratitude for their sacrifices and contributions at home and abroad and providing economic support. In time, the tragedy of the Korean War and Korea's pro-democracy movement of the past half century will become distant history. As generations change, and as the purpose and methods of exchange between regions change, the tragedies that occurred and the ways people sacrificed for the country are expected to be remembered differently than before. In particular, the number of Korean War veterans and their family members is gradually decreasing as they grow old. In addition, the outbreak of global infectious diseases such as COVID-19 has made it difficult to plan and hold face-to-face events as before. Korea's digital technology is being applied in various ways: 5G communication networks, smartphones, tablet PCs, and smart devices that support virtual reality are already part of everyday life. Business meetings are held in metaverse environments, and concerts by famous singers take place online. Artificial intelligence has also been introduced into human resource recruitment and customer service, improving corporate work efficiency. This technology can likewise be used in the field of veterans affairs. In particular, metaverse technology can vividly show the situation during the Korean War, and the voices and facial expressions of surviving veterans can be digitized to convey their memories and lessons to future generations. If these digital methods are realized on an online platform for veterans' commemorative events, veterans and their families on the other side of the world will be able to participate far more conveniently.

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.79-96 / 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of studies have been conducted to improve firms' short-term performance and to enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. Discovering promising technology depends on how a firm evaluates the value of technologies, so many evaluation methods have been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. Although this approach provides in-depth analysis and ensures the validity of analysis results, it is usually cost- and time-inefficient and limited to qualitative evaluation. Many studies have attempted to forecast the value of technology using patent information to overcome this limitation. Patent-based technology evaluation has served as a valuable assessment approach in technological forecasting because a patent contains a full and practical description of a technology in a uniform structure; furthermore, it provides information that is not divulged in any other source. Although the patent-information-based approach has contributed to our understanding of how promising technologies can be predicted, it has some limitations: prediction is made from past patent information, and interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of technologies is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. The impact of a technology refers to its influence on the development and improvement of future technologies, and is also clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, and represents the breadth of search underlying the technology. Fusion can be calculated per technology or per patent, so this study measures two fusion indexes: a fusion index per technology and a fusion index per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, ...) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, this study recommends the final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promisingness index for each technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes; for the remaining indexes, however, it is slightly higher. These unexpected results may be explained, in part, by the small number of patents: since this study only uses patent data in class G06F, the sample is relatively small, leading to incomplete learning of the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial neural network. It helps managers who plan technology development and policy makers who implement technology policy by providing a quantitative prediction methodology. In addition, this study can help other researchers by providing a deeper understanding of the complex field of technological forecasting.
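To make the second module concrete, here is a minimal sketch, assuming synthetic index histories: a small backpropagation network (scikit-learn's MLPRegressor) takes the five indexes at three earlier time steps as input and predicts all five at time t, scored by the paper's MAE metric. The lag window, layer size, and hold-out split are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the second module: predict the five patent indexes at
# time t from their values at earlier times via a backpropagation network.
# The index series are synthetic; lag window and network size are assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
years, n_indexes = 40, 5   # impact, fusion/tech, fusion/patent, diffusion/tech, diffusion/patent
series = rng.random((years, n_indexes)).cumsum(axis=0)  # stand-in index histories

lags = 3  # use t-1, t-2, t-3 to predict t (assumed lag window)
X = np.hstack([series[lags - 1 - k: years - 1 - k] for k in range(lags)])
y = series[lags:]  # targets: all five indexes at time t

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[:-5], y[:-5])                   # hold out the last 5 years
pred = model.predict(X[-5:])
mae = np.abs(pred - y[-5:]).mean()          # the paper's comparison metric (MAE)
print(f"hold-out MAE: {mae:.3f}")
```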

Trend in Research and Application of Hard Carbon-based Thin Films (탄소계 경질 박막의 연구 및 산업 적용 동향)

  • Lee, Gyeong-Hwang;Park, Jong-Won;Yang, Ji-Hun;Jeong, Jae-In
    • Proceedings of the Korean Institute of Surface Engineering Conference / 2009.05a / pp.111-112 / 2009
  • Diamond-like carbon (DLC) is a convenient term for the various forms of amorphous carbon: amorphous carbon (a-C), tetrahedral amorphous carbon (ta-C), and their hydrogenated counterparts (a-C:H and ta-C:H). In the ternary phase diagram, a-C films with disordered graphitic ordering, such as soot, chars, glassy carbon, and evaporated a-C, sit in the lower left-hand corner. If the fraction of sp3 bonding reaches a high degree, the material is denoted tetrahedral amorphous carbon (ta-C) to distinguish it from sp2 a-C [2]. Two hydrocarbon polymers, polyethylene (CH2)n and polyacetylene (CH)n, define the limits of the triangle in the right-hand corner, beyond which interconnecting C-C networks do not form and only straight-chain molecules result. The DLC films (a-C, ta-C, a-C:H, and ta-C:H) share some extreme properties with diamond, such as hardness, high elastic modulus, and chemical inertness, which give them great advantages in many applications. One of the most important applications of carbon-based films is as coatings for magnetic hard disk recording. The second successful application is wear-protective and antireflective films for IR windows. The third is wear protection of bearings and sliding friction parts, and the fourth is precision gauges for the automotive industry. Recently, an exciting ongoing study [1] has tried to deposit a carbon-based protective film on engine parts (e.g., engine cylinders and pistons), exploiting not only low friction and wear but also self-lubricating properties; a reduction in oil consumption is expected. As an additional application field, carbon-based films are being studied extensively as excellent candidates for biocompatible coatings on biomedical implants: they consist of carbon, hydrogen, and nitrogen, which are biologically harmless and are also main elements of the human body. Some in vitro and limited in vivo studies on the biological effects of carbon-based films have been reported [2~5]. Carbon-based films thus have great potential in many fields, although a few technological issues still need to be studied to improve their applicability. Aisenberg and Chabot [3] first prepared an amorphous carbon film on substrates kept at room temperature using a beam of carbon ions produced in an argon plasma, and Spencer et al. [4] subsequently developed this field. Many deposition techniques for DLC films have been developed to increase the fraction of sp3 bonding in the films. The a-C films have been prepared by a variety of deposition methods, such as ion plating, DC or RF sputtering, RF or DC plasma-enhanced chemical vapor deposition (PECVD), electron cyclotron resonance chemical vapor deposition (ECR-CVD), ion implantation, ablation, pulsed laser deposition, and cathodic arc deposition, from a variety of carbon targets or gaseous source materials [5]. Sputtering is the most common deposition method for a-C films; films deposited by plasma methods such as PECVD [6] fall in the interior of the triangle. The application fields of DLC films were surveyed from the literature. Many papers target tribology, owing to the films' low friction and wear resistance. Figure 1 shows the share of DLC research interest by application field: tribology occupies the largest portion at 57%, followed by the biomedical field at 14%. The biomedical field is now attracting notice in many countries, and its research output has increased significantly. As shown in Figure 2, DLC films were already applied in many industries in 2005; the mold and machinery industries account for over 50% of demand. The automobile industry is applying DLC to more and more parts, and in the near future it is expected to become a big market for DLC coating. (Figure 1: Research interests of carbon-based films. Figure 2: Demand ratio of DLC coating by industry in 2005.) In this presentation, I will introduce trends in carbon-based coating research and applications.

Evaluation of Future Turbidity Water and Eutrophication in Chungju Lake by Climate Change Using CE-QUAL-W2 (CE-QUAL-W2를 이용한 충주호의 기후변화에 따른 탁수 및 부영양화 영향평가)

  • Ahn, So Ra;Ha, Rim;Yoon, Sung Wan;Kim, Seong Joon
    • Journal of Korea Water Resources Association / v.47 no.2 / pp.145-159 / 2014
  • This study evaluates the impact of future climate change on turbid water and eutrophication in Chungju Lake using the CE-QUAL-W2 reservoir water quality model coupled with the SWAT watershed model. SWAT was calibrated and validated using 11 years (2000~2010) of daily streamflow data at three locations and monthly stream water quality data at two locations. CE-QUAL-W2 was calibrated and validated against 2 years (2008 and 2010) of water temperature, suspended solids, total nitrogen, total phosphorus, and Chl-a data. For the future assessment, the SWAT results were used as boundary conditions for the CE-QUAL-W2 model runs. To evaluate future water quality variation in the reservoir, climate data predicted by the MM5 RCM (Regional Climate Model) under the Special Report on Emissions Scenarios (SRES) A1B scenario for three periods (2013~2040, 2041~2070, and 2071~2100) were downscaled by an artificial neural network method so as to account for typhoon effects. The RCM temperature and precipitation outputs and historical records were used to generate pollutant loading from the watershed. With the future temperature increase, the lake water temperature showed a 0.5°C increase at shallow depths but a 0.9°C decrease at deep depths. The future annual maximum sediment concentration entering the lake from the watershed showed a 17% increase in wet years. The future residence time of lake water with suspended solids (SS) above 10 mg/L increased by 6 and 17 days in wet and dry years, respectively, compared with a normal year, and the SS occupancy rate of the lake increased by 24% and 26% in wet and dry years, respectively. In summary, future lake turbidity is expected to last longer at higher concentrations than at present. Under this future watershed and in-lake environment, the maximum Chl-a concentration showed increases of 19% in wet years and 3% in dry years.
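The ANN downscaling step can be illustrated with a minimal sketch: a small neural network is calibrated on overlapping historical records to map coarse RCM outputs (temperature, precipitation) to station-scale values, then applied to scenario output. The data, network size, and calibration split below are synthetic stand-ins, not the study's configuration.

```python
# Minimal sketch of ANN-based statistical downscaling: map coarse RCM
# [temperature, precipitation] to station-scale values, trained on a
# historical overlap period. All data here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_days = 2000
rcm = rng.random((n_days, 2))  # coarse [temp, precip] (stand-in)
station = rcm @ np.array([[1.2, 0.1], [0.0, 1.8]]) + rng.normal(0, 0.05, (n_days, 2))

scaler = StandardScaler().fit(rcm)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=1)
ann.fit(scaler.transform(rcm[:1500]), station[:1500])  # calibrate on history

future_rcm = rng.random((365, 2))  # one year of scenario output (stand-in)
downscaled = ann.predict(scaler.transform(future_rcm))  # station-scale series
print(downscaled[:3])
```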

Media Habits of Sensation Seekers (감지추구자적매체습관(感知追求者的媒体习惯))

  • Blakeney, Alisha;Findley, Casey;Self, Donald R.;Ingram, Rhea;Garrett, Tony
    • Journal of Global Scholars of Marketing Science / v.20 no.2 / pp.179-187 / 2010
  • Understanding consumers' preferences for and use of media types is imperative for marketing and advertising managers, especially in today's fragmented market. A clear understanding assists managers in selecting appropriate media outlets more effectively, yet individuals' choices of media type and use are based on a variety of characteristics. This paper examines one personality trait, sensation seeking, which has not appeared in the literature examining "new" media preferences and use. Sensation seeking is a personality trait defined as "the need for varied, novel, and complex sensations and experiences and the willingness to take physical and social risks for the sake of such experiences" (Zuckerman 1979). Six hypotheses were developed from a review of the literature. Particular attention was given to the Uses and Gratifications theory (Katz 1959), which explains the various reasons why people choose media types and their motivations for using different types of media. Current theory suggests that high sensation seekers (HSS), given their needs for novelty, arousal, and unconventional content and imagery, would exhibit higher frequency of use of new media. Specifically, we hypothesize that HSS will use the internet more than broadcast (H1a) or print media (H1b) and more than low (LSS) (H2a) or medium sensation seekers (MSS) (H2b). In addition, HSS have been found to be more social and to have more friends, and are therefore expected to use social networking websites such as Facebook/MySpace (H3) and chat rooms (H4) more than LSS (a) and MSS (b). Sensation seeking can also manifest in a range of behaviors, including disinhibition. It is expected that social networks such as Facebook/MySpace (H5) and chat rooms (H6) will be used more often by those with high levels of disinhibition than by those with low (a) or medium (b) levels. Data were collected using an online survey of participants in extreme sports. To reach this group, the chain-referral method, an improved version of snowball sampling, was used to select respondents, as it is regarded as effective for reaching otherwise hidden population groups (Heckathorn, 1997). The final usable sample of 1,108 respondents, which was mainly young (56.36% under 34), male (86.1%), and middle class (58.7% with household incomes over USD 50,000), was consistent with previous studies on sensation seeking. Sensation seeking was captured using an existing measure, the Brief Sensation Seeking Scale (Hoyle et al., 2002), and media usage was captured by measuring self-reported usage of various media types. Results did not support H1a and H1b: HSS did not show higher usage of alternative media such as the internet, in fact showing lower mean usage of it than of all other media types. The medium most used by HSS was print, suggesting a revolt against the mainstream. Results support H2a and H2b: HSS are more frequent users of the internet than LSS or MSS. Further analysis revealed significant differences in the use of print media between HSS and LSS, suggesting that HSS may seek out more specialized print publications for their respective extreme sport activities. Results for H3a and H3b showed that HSS use Facebook/MySpace more frequently than either LSS or MSS. There were no significant differences in chat room use between LSS and HSS, so H4a was not supported, although the difference from MSS was significant (H4b). Respondents with varying levels of disinhibition were expected to differ in their use of Facebook/MySpace and chat rooms. Those with high levels of disinhibition used Facebook/MySpace more than those with low or medium levels, supporting H5a and H5b. Similarly, there was support for H6b: those with high levels of disinhibition use chat rooms significantly more than those with medium levels, but not more than those with low levels (H6a). The findings are counterintuitive and give some interesting insights for managers. First, although HSS use online media more frequently than LSS or MSS, this group's use of online media is lower than its use of either print or broadcast media; advertising executives should not place too much emphasis on online media for this important market segment. Second, social media such as Facebook/MySpace and chat rooms should be examined by managers as potential ways to reach this group. Finally, there is an implication for public policy in the higher use of social media by those who are disinhibited: these individuals are more inclined to engage in socially risky behavior, which may have dire consequences, e.g., exposure to internet predators or scrutiny by future employers. A limitation of the study is that it includes only those who engage in extreme sports, which is by nature an HSS activity; a broader population is needed to test whether these results hold.
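As an illustration of the group comparisons the paper reports, here is a minimal sketch of a one-way ANOVA on self-reported internet usage across the three sensation-seeking groups; the data are synthetic and the paper does not state which statistical test it used.

```python
# Minimal sketch of a usage comparison across low, medium and high
# sensation seekers (one-way ANOVA). Scores are synthetic stand-ins;
# the paper does not specify its test, so this is illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lss = rng.normal(3.0, 1.0, 200)  # stand-in usage scores, low sensation seekers
mss = rng.normal(3.4, 1.0, 500)  # medium
hss = rng.normal(3.9, 1.0, 400)  # high

f, p = stats.f_oneway(lss, mss, hss)
print(f"F = {f:.2f}, p = {p:.4f}")  # small p: usage differs across groups
```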

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • Since the value of information has been recognized in the information society, the use and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy; this is inevitable, since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, like Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and for the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction. Using SVR, we built a model that measures the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid search to find the optimal values of the parameters C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA but showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
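A minimal sketch of the proposed approach follows: ε-insensitive SVR with a grid search over C, gamma (the RBF kernel width, related to σ²), and ε, evaluated by MAE on a hold-out set. The facial-feature inputs and arousal targets are synthetic stand-ins, and the grid values are illustrative assumptions rather than the paper's.

```python
# Minimal sketch: epsilon-insensitive SVR with a grid search over C, gamma
# and epsilon, scored by mean absolute error. Inputs/targets are synthetic
# stand-ins for facial features and arousal level; grid values are assumed.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
X = rng.random((297, 10))  # 297 cases, as in the paper; 10 features assumed
y = X @ rng.random(10) + rng.normal(0, 0.1, 297)  # stand-in arousal level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1], "epsilon": [0.01, 0.1, 0.5]}
search = GridSearchCV(SVR(kernel="rbf"), grid,
                      scoring="neg_mean_absolute_error", cv=5)
search.fit(X_tr, y_tr)

pred = search.predict(X_te)  # hold-out evaluation, as in the paper
print("best params:", search.best_params_)
print("hold-out MAE:", mean_absolute_error(y_te, pred))
```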

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.117-127 / 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread adoption of smartphones, online social network services have become popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand their human relationships. In social network services, the relations between users are represented by a graph consisting of nodes and links. As the number of users of online social network services increases rapidly, SNS are actively utilized in enterprise marketing, analysis of social phenomena, and so on. Social Network Analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between nodes. With SNA, we can measure relationships among people, such as the degree of intimacy, the intensity of connections, and the classification of groups. Ever since Social Networking Services (SNS) drew the attention of millions of users, numerous studies have been conducted to analyze their user relationships and messages. The typical representative SNA methods are degree centrality, betweenness centrality, and closeness centrality. Degree centrality analysis does not consider the shortest path between nodes; the shortest path is, however, a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not a serious concern since the social networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to the ever-increasing volume of SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; with 10,000 nodes the network can have up to 49,995,000 links, making it very expensive to analyze. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Using this shortest path finding method, we show how efficient our proposed approach is by conducting betweenness centrality analysis and closeness centrality analysis, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first search and a preprocessing step to reduce computation time and rapidly find shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. Since a large number of links is concentrated on only a few nodes in online social networks, most nodes have relatively few connections; as a result, a node with many connections functions as a hub. When searching for a particular node, looking first at users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is the set of vertices and E is the set of links between pairs of distinct nodes. Because a heuristic evaluation function is used, the worst case can occur when the target node is situated at the bottom of a skewed tree; the preprocessing step is conducted to handle such cases. We then find the shortest path between two nodes in the social network efficiently and analyze the network. To verify the proposed method, we crawled data on 160,000 people online and constructed a social network, then compared our method with previous methods, best-first search and breadth-first search, in terms of search and analysis time. The suggested method takes 240 seconds to search nodes, where the breadth-first-search-based method takes 1,781 seconds (7.4 times faster). Moreover, for social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The method proposed in this paper shows that a large social network can be analyzed with better time performance. As a result, our method can improve the efficiency of social network analysis, making it particularly useful in studying social trends and phenomena.
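A minimal sketch of the degree-heuristic idea follows: a greedy best-first search that expands high-degree "hub" nodes first when looking for a path between two users. The toy graph and function name are illustrative; the paper's full method adds the preprocessing step, which is not reproduced here, and the path found heuristically is not guaranteed to be the true shortest path.

```python
# Minimal sketch: greedy best-first search that prefers high-degree nodes,
# reflecting that hubs in SNS graphs are good stepping stones. The graph
# and function name are toy stand-ins for illustration.
import heapq

def degree_best_first_path(graph, start, goal):
    # graph: dict mapping node -> set of neighbor nodes
    frontier = [(-len(graph[start]), start, [start])]  # negate degree: max-priority
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(frontier, (-len(graph[nb]), nb, path + [nb]))
    return None  # no path exists

# Toy network: 'hub' connects most users, as in real SNS degree distributions.
g = {
    "a": {"hub"}, "b": {"hub", "c"}, "c": {"b", "d"},
    "d": {"c", "hub"}, "hub": {"a", "b", "d"},
}
print(degree_best_first_path(g, "a", "c"))  # e.g. ['a', 'hub', 'b', 'c']
```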

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems / v.20 no.2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflow is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, for example over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing: existing studies have focused on only a small subset of them, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limits of users' knowledge of and experience with context-aware computing technology. Because context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge about the technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. A survey that assumes the participants have sufficient experience with or understanding of the technologies may therefore not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics in order to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to support this, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with a higher potential to increase users' privacy concerns. Secondly, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns. The traditional questionnaire method was not selected because, for context-aware personalized services, users still largely lack understanding of and experience with the new technology. Regarding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and sensory networks as the most important factors among the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology, and of deciding which technologies are needed and in what sequence, in order to acquire the appropriate types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information as context-aware technology develops; the results of this study show, however, that in terms of users' privacy it is necessary to pay greater attention to the activities that acquire context information. Following the sub-factor evaluation results, additional studies will be necessary on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users under the anywhere-anytime-any-device concept, are regarded as even more important in context-aware personalized services than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
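The concordance analysis mentioned above can be illustrated with Kendall's W, a common measure of agreement among rankings (the abstract does not name the statistic it used). A minimal sketch with synthetic expert rankings:

```python
# Minimal sketch of a concordance check across Delphi experts using
# Kendall's W. Rankings are synthetic stand-ins: rows = experts,
# columns = factors being ranked (1 = most important).
import numpy as np

def kendalls_w(rankings):
    m, n = rankings.shape            # m experts ranking n factors
    col_sums = rankings.sum(axis=0)  # total rank received by each factor
    s = ((col_sums - col_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))  # W in [0, 1]; 1 = full agreement

rankings = np.array([
    [1, 2, 3, 4, 5],  # expert 1's ranking of five factors
    [1, 3, 2, 4, 5],  # expert 2
    [2, 1, 3, 5, 4],  # expert 3
])
print(f"Kendall's W = {kendalls_w(rankings):.2f}")  # near 1: strong consensus
```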