• Title/Summary/Keyword: E2E performance

Search Results: 3,820

Feasibility Study of Wetland-pond Systems for Water Quality Improvement and Agricultural Reuse (습지-연못 연계시스템에 의한 수질개선과 농업적 재이용 타당성 분석)

  • Jang, Jae-Ho;Jung, Kwang-Wook;Ham, Jong-Hwa;Yoon, Chun-Gyeong
    • Korean Journal of Ecology and Environment, v.37 no.3 s.108, pp.344-354, 2004
  • A pilot study was performed from September 2000 to April 2004 to examine the feasibility of a wetland-pond system for the agricultural reuse of reclaimed water. The wetland was a subsurface-flow type with a hydraulic residence time of 3.5 days, and the subsequent pond was 8 m³ in volume (2 m × 2 m × 2 m), operated in both intermittent-discharge and continuous-flow modes. The wetland system was effective in treating the sewage: median removal efficiencies of BOD5 and TSS were above 70.0%, with mean effluent concentrations of 27.1 and 16.8 mg/L for these constituents, respectively, although these often exceeded the effluent water quality standard of 20 mg/L. Removal of T-N and T-P was relatively less effective, with mean effluent concentrations of approximately 103.2 and 7.2 mg/L, respectively. The wetland system removed microorganisms at a high rate (90~92%), but effluent concentrations were in the range of 300~16,000 MPN/100 mL, which is still high for agricultural reuse. The subsequent pond provided further treatment of the wetland effluent; in particular, the additional microorganism removal of the combined wetland-pond system reduced the mean concentration from about 10⁵ MPN/100 mL in the wetland influent to 1,000 MPN/100 mL. Other parameters in the pond showed seasonal variation, and the upper layer of the pond water column became remarkably clear immediately after ice melt. Overall, the wetland system was found to be adequate for treating sewage with stable removal efficiency, and the subsequent pond was effective for further polishing. This study concerned the agricultural reuse of reclaimed water using natural systems. Considering its stable performance, its effective removal of bacterial indicators as well as other water quality parameters, its low maintenance, and its cost-effectiveness, the wetland-pond system appears to be an effective and feasible alternative for the agricultural reuse of reclaimed water in rural areas.
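The removal figures in this abstract are simple influent/effluent ratios, and the microbial result is a log reduction; as a minimal sketch (the influent BOD value below is assumed for illustration, not taken from the paper):

```python
import math

def removal_efficiency(influent, effluent):
    """Percent removal of a constituent across a treatment stage."""
    return 100.0 * (influent - effluent) / influent

def log_reduction(c_in, c_out):
    """Order-of-magnitude (log10) reduction, the usual way to
    report microbial removal."""
    return math.log10(c_in / c_out)

# Assumed influent of 90.3 mg/L against the reported mean effluent:
bod_removal = removal_efficiency(influent=90.3, effluent=27.1)

# The combined wetland-pond system reduced indicator bacteria from
# about 1e5 to about 1,000 MPN/100 mL, i.e. a 2-log reduction.
coliform_logs = log_reduction(1e5, 1_000)
```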

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration, v.2 no.1, pp.26-32, 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and labor-intensive step. For production seismic data processing, a good velocity analysis tool is required as well as a high-performance computer, and the tool must give fast and accurate results. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point; generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires considerable human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with the mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. The analysis must also be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence usually must be repeated. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack.
Most parameter changes yield the final stack in a few mouse clicks, enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted wave, but it has two improvements: no interpolation error and very fast computation. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. It references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. The program runs under X-Window/Motif, with menus designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
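The NMO correction that such a tool applies to each trace of the super gather can be sketched generically; this is a textbook single-trace version, not Geobit's or xva's implementation, and the sampling conventions are assumptions:

```python
import numpy as np

def nmo_correct(trace, t, offset, v_nmo):
    """Normal moveout correction of one trace.

    trace  : amplitudes sampled at zero-offset times t (seconds)
    offset : source-receiver offset (m)
    v_nmo  : NMO (stacking) velocity (m/s)

    For each zero-offset time t0, the reflection arrives at
    sqrt(t0**2 + (offset/v_nmo)**2); we read the trace there and
    move that sample back to t0, flattening the hyperbola so that
    traces of a gather can be stacked.
    """
    t_hyp = np.sqrt(t ** 2 + (offset / v_nmo) ** 2)
    # Samples mapped beyond the record are zeroed (stretch mute
    # is not modeled in this sketch).
    return np.interp(t_hyp, t, trace, left=0.0, right=0.0)
```

At zero offset the correction is the identity, which makes a convenient sanity check; with the correct stacking velocity, a reflection appears at the same time on every corrected trace.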


A Case Study of the Performance and Success Factors of ISMP(Information Systems Master Plan) (정보시스템 마스터플랜(ISMP) 수행 성과와 성공요인에 관한 사례연구)

  • Park, So-Hyun;Lee, Kuk-Hie;Gu, Bon-Jae;Kim, Min-Seog
    • Information Systems Review, v.14 no.1, pp.85-103, 2012
  • ISMP is a method of clearly specifying the user requirements in the RFP (Request for Proposal) of IS development projects. Unlike conventional methods of RFP preparation, which describe the user requirements of target systems in a rather superficial manner, ISMP systematically identifies the business needs and the status of information technology, analyzes the user requirements in detail, and defines the specific functions of the target systems. By increasing the clarity of the RFP, the scale and complexity of the related work can be calculated accurately, responding companies can prepare their proposals clearly, and the fairness of proposal evaluation can be improved as well. Above all, the chronic problems of this field, i.e., misunderstanding and conflicts between users and developers, excessive burden on developers, etc., can be resolved. This study is a case study that analyzes the execution process, accomplishments, problems, and success factors of two pilot projects that introduced ISMP for the first time. The ISMP procedures actually performed on site were verified, and how the user needs were described in the RFP was examined. Satisfaction with the ISMP-based RFP was found to be high compared to the conventional RFP. Although some problems occurred, such as difficulties in RFP preparation and an increased workload due to the lack of understanding and execution experience of ISMP, overall there were positive effects, such as a clearer scope for the target systems, improved information sharing and cooperation between users and developers, seamless communication between the issuing customer corporations and the IT service companies, and a reduction of changes in user requirements.
As a result of action-research-style in-depth interviews with the persons in charge of the actual work, the following ISMP success factors were derived: prior consensus on the need for ISMP, the acquisition of execution resources through the support of the CEO and CIO, and the selection of the specification level of the user requirements. The results of this study will provide useful field information to corporations considering adopting ISMP and to IT service firms, and present meaningful suggestions on future research directions to researchers in the field of IT service competitive advantage.


Development of a Window Program for Searching CpG Island (CpG Island 검색용 윈도우 프로그램 개발)

  • Kim, Ki-Bong
    • Journal of Life Science, v.18 no.8, pp.1132-1139, 2008
  • A CpG island is a short stretch of DNA in which the frequency of the CG dinucleotide is higher than in other regions. CpG islands are present in the promoters and exonic regions of approximately 30~60% of mammalian genes, so they are useful markers for genes in organisms whose genomes contain 5-methylcytosine. Recent evidence supports the notion that hypermethylation of CpG islands, by silencing tumor suppressor genes, plays a major causal role in cancer and has been described in almost every tumor type. In this respect, CpG island search by computational methods is very helpful for cancer research and for computational promoter and gene prediction. I therefore developed a Windows program (called CpGi) on the basis of the CpG island criteria defined by D. Takai and P. A. Jones. The program CpGi was implemented in Visual C++ 6.0 and can determine the locations of CpG islands using diverse user-specified parameters (%GC, Obs(CpG)/Exp(CpG), window size, step size, gap value, number of CpGs, length). The analysis result of CpGi provides a graphical map of the CpG islands and a G+C% plot, and more detailed information on each CpG island can be obtained through a pop-up window. Two human contigs, i.e. AP00524 (from chromosome 22) and NT_029490.3 (from chromosome 21), were used to compare the accuracy of CpGi's search results against two other public programs. The two other programs used in the comparison, Emboss-CpGPlot and CpG Island Searcher, are web-based public CpG island search programs. The comparison showed that CpGi is on a par with or outperforms Emboss-CpGPlot and CpG Island Searcher. With its simple, easy-to-use interface, CpGi should be a very useful tool for genome analysis and CpG island research. To obtain a copy of CpGi for academic use only, contact the corresponding author.
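The Takai-Jones criteria that this kind of program implements reduce to per-window GC content and an observed/expected CpG ratio; a minimal single-window sketch (thresholds from the published criteria; the sliding of the window and the merging of qualifying stretches into islands of at least 500 bp are omitted):

```python
def window_stats(seq):
    """GC fraction and observed/expected CpG ratio for one window."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")
    gc = (c + g) / n
    # ObsCpG/ExpCpG = (#CpG * N) / (#C * #G)
    obs_exp = cpg * n / (c * g) if c and g else 0.0
    return gc, obs_exp

def passes_takai_jones(seq, min_gc=0.55, min_obs_exp=0.65):
    """Single-window test against the Takai-Jones thresholds
    (GC >= 55%, Obs/Exp CpG >= 0.65)."""
    gc, oe = window_stats(seq)
    return gc >= min_gc and oe >= min_obs_exp
```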

The Application of Operations Research to Librarianship : Some Research Directions (운영연구(OR)의 도서관응용 -그 몇가지 잠재적응용분야에 대하여-)

  • Choi Sung Jin
    • Journal of the Korean Society for Library and Information Science, v.4, pp.43-71, 1975
  • Operations research has developed rapidly since its origins in World War II. Practitioners of O.R. have contributed to almost every aspect of government and business. More recently, a number of operations researchers have turned their attention to library and information systems, and the author believes that significant research has resulted. It is the purpose of this essay to introduce the library audience to some of these accomplishments, to present some of the author's hypotheses on aspects of library management to which he believes O.R. has much to offer, and to suggest some future research directions. Some problem areas in librarianship where O.R. may play a part are discussed and summarized below. (1) Library location. It is usually necessary to strike a balance between accessibility and cost in location problems. Many mathematical methods are available for identifying optimal locations once the balance between these two criteria has been decided. The major difficulties lie in relating cost to size and in taking future change into account when discriminating among possible solutions. (2) Planning new facilities. Standard approaches to using mathematical models for simple investment decisions are well established. If the problem is one of choosing the most economical way of achieving a certain objective, one may compare the alternatives using one of the discounted cash flow techniques. In other situations it may be necessary to use a cost-benefit approach. (3) Allocating library resources. In order to allocate resources to best advantage, the librarian needs to know how the effectiveness of the services he offers depends on the way he deploys his resources. The O.R. approach to such problems is to construct a model representing effectiveness as a mathematical function of the levels of different inputs (e.g., numbers of people in different jobs, acquisitions of different types, physical resources). (4) Long term planning.
Resource allocation problems are generally concerned with up to one and a half years ahead. The longer term certainly offers both greater freedom of action and greater uncertainty, so it is difficult to generalize about long term planning problems. In other fields, however, O.R. has made a significant contribution to long range planning, and it is likely to have one to make in librarianship as well. (5) Public relations. It is generally accepted that actual and potential users are too ignorant both of the range of library services provided and of how to make use of them. How should services be brought to the attention of potential users? The answer seems to lie in obtaining empirical evidence from controlled experiments in which a group of libraries participates. (6) Acquisition policy. In comparing alternative policies for the acquisition of materials, one needs to know, first, the implications of each policy for each service that depends on the stock, and second, the relative importance to be ascribed to each service for each class of user. By reducing the uncertainty of the first, formal models will allow the librarian to concentrate his attention on the value judgements necessary for the second. (7) Loan policy. The approach to choosing between loan policies is much the same as the previous approach. (8) Manpower planning. For large library systems one should consider constructing models which compare the skills necessary in the future with predictions of the skills that will be available, so as to allow informed decisions. (9) Management information systems for libraries. A great deal of data is available in libraries as a by-product of all recording activities. It is particularly tempting, when procedures are computerized, to make summary statistics available as a management information system. The value of information to particular decisions that may have to be taken in the future is best assessed in terms of a model of the relevant problem. (10) Management gaming.
One of the most common uses of a management game is as a means of developing staff's ability to take decisions. The value of such exercises depends upon the validity of the computerized model. If the model were sufficiently simple to take the form of a mathematical equation, decision-makers would probably be able to learn adequately from a graph; more complex situations require simulation models. (11) Diagnostic tools. Libraries are sufficiently complex systems that it would be useful to have available simple means of telling whether performance can be regarded as satisfactory and which, if it cannot, also provide pointers to what is wrong. (12) Data banks. It would appear to be worth considering establishing a bank for certain types of data. If certain items on questionnaires were to take a standard form, a greater pool of data would be available for various analyses. (13) Effectiveness measures. The meaning of a library performance measure is not readily interpreted. Each measure must be assessed in relation to the corresponding measures for earlier periods of time and to a standard measure, which may be a corresponding measure in another library, the 'norm', the 'best practice', or user expectations.
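Point (2) mentions discounted cash flow techniques for comparing facility alternatives; the core calculation is a net present value, sketched below (the discount rate and cash flows are invented for illustration):

```python
def npv(rate, cashflows):
    """Net present value: cashflows[0] occurs now, cashflows[t]
    occurs after t years, all discounted at the given annual rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Two hypothetical ways to achieve the same objective: a cheap
# building with high running costs versus a dearer one with low
# running costs, over a ten-year horizon at 8%.
option_a = npv(0.08, [-100_000] + [-12_000] * 10)
option_b = npv(0.08, [-150_000] + [-4_000] * 10)
# The less negative NPV identifies the more economical alternative.
```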


Value of Information Technology Outsourcing: An Empirical Analysis of Korean Industries (IT 아웃소싱의 가치에 관한 연구: 한국 산업에 대한 실증분석)

  • Han, Kun-Soo;Lee, Kang-Bae
    • Asia pacific journal of information systems, v.20 no.3, pp.115-137, 2010
  • Information technology (IT) outsourcing, the use of a third-party vendor to provide IT services, started in the late 1980s and early 1990s in Korea and has increased rapidly since 2000. Recently, firms have increased their efforts to capture greater value from IT outsourcing. To date, there have been a large number of studies on IT outsourcing. Most prior studies have focused on outsourcing practices and decisions, and little attention has been paid to objectively measuring the value of IT outsourcing. In addition, studies that examined the performance of IT outsourcing have mainly relied on anecdotal evidence or practitioners' perceptions. Our study examines the contribution of IT outsourcing to economic growth in Korean industries over the 1990 to 2007 period, using a production function framework and a panel data set for 54 industries constructed from input-output tables, fixed-capital formation tables, and employment tables. Based on the framework and estimation procedures that Han, Kauffman and Nault (2010) used to examine the economic impact of IT outsourcing in U.S. industries, we evaluate the impact of IT outsourcing on output and productivity in Korean industries. Because IT outsourcing started to grow at a significantly more rapid pace in 2000, we compare its impact in the pre- and post-2000 periods. Our industry-level panel data cover a large proportion of the Korean economy: 54 out of 58 Korean industries. This gives us a greater opportunity to assess the impact of IT outsourcing on objective performance measures, such as output and productivity. Using IT outsourcing and IT capital as our primary independent variables, we employ an extended Cobb-Douglas production function in which both variables are treated as factor inputs. We also derive and estimate a labor productivity equation to assess the impact of our IT variables on labor productivity.
We use data from seven years (1990, 1993, 2000, 2003, 2005, 2006, and 2007) for which both input-output tables and fixed-capital formation tables are available. Combining the input-output tables and fixed-capital formation tables resulted in 54 industries. IT outsourcing is measured as the value of computer-related services purchased by each industry in a given year. All variables have been converted to 2000 Korean Won using GDP deflators. To calculate labor hours, we use the average work hours for each sector provided by the OECD. To effectively control for the heteroskedasticity and autocorrelation present in our dataset, we use feasible generalized least squares (FGLS) procedures. Because the AR1 process may be industry-specific (i.e., panel-specific), we consider both common AR1 and panel-specific AR1 (PSAR1) processes in our estimations. We also include year dummies to control for year-specific effects common across industries, and sector dummies (as defined in the GDP deflator) to control for time-invariant sector-specific effects. Based on the full sample of 378 observations, we find that a 1% increase in IT outsourcing is associated with a 0.012~0.014% increase in gross output, and a 1% increase in IT capital is associated with a 0.024~0.027% increase in gross output. To compare the contribution of IT outsourcing relative to that of IT capital, we examined the gross marginal product (GMP). The average GMP of IT outsourcing was 6.423, substantially greater than that of IT capital at 2.093. This indicates that, on average, if an industry invests KRW 1 million in IT outsourcing, it can increase its output by KRW 6.4 million. In terms of the contribution to labor productivity, we find that a 1% increase in IT outsourcing is associated with a 0.009~0.01% increase in labor productivity, while a 1% increase in IT capital is associated with a 0.024~0.025% increase in labor productivity.
Overall, our results indicate that IT outsourcing has made positive and economically meaningful contributions to output and productivity in Korean industries over the 1990 to 2007 period. The average GMP of IT outsourcing we report for Korean industries is 1.44 times greater than that reported for U.S. industries in Han et al. (2010). Further, we find that the contribution of IT outsourcing was significantly greater in the 2000~2007 period, during which the growth of IT outsourcing accelerated. Our study provides implications for policymakers and managers. First, our results suggest that Korean industries can capture further benefits by increasing investments in IT outsourcing. Second, our analyses and results provide a basis for managers to assess the impact of investments in IT outsourcing and IT capital in an objective and quantitative manner. Building on our study, future research should examine the impact of IT outsourcing at a more detailed industry level and at the firm level.
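The elasticity estimates above come from a log-linear (Cobb-Douglas) specification, in which each coefficient is the percent change in output per 1% change in an input. A minimal OLS sketch on synthetic data follows; the paper itself uses FGLS with AR(1) errors and year/sector dummies, which this omits, and every number below is invented for illustration:

```python
import numpy as np

# ln(output) = a + b1*ln(IT outsourcing) + b2*ln(IT capital):
# in log-log form, b1 and b2 are output elasticities.
rng = np.random.default_rng(0)
n = 378                                   # size of the paper's full sample
ln_outsourcing = rng.normal(10.0, 1.0, n) # synthetic ln(IT outsourcing)
ln_it_capital = rng.normal(12.0, 1.0, n)  # synthetic ln(IT capital)
# Plant an elasticity of 0.013, in the middle of the paper's
# reported 0.012~0.014 range (noise omitted for the sketch).
ln_output = 1.0 + 0.013 * ln_outsourcing + 0.025 * ln_it_capital

X = np.column_stack([np.ones(n), ln_outsourcing, ln_it_capital])
beta, *_ = np.linalg.lstsq(X, ln_output, rcond=None)
# beta[1] and beta[2] recover the planted elasticities.
```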

RELATIONSHIP BETWEEN TEST-ANXIETY, DEPRESSION, TRAIT ANXIETY AND STATE ANXIETY (시험불안과 우울, 특성불안 및 상태불안과의 상호관계에 관한 연구)

  • Jung, Yeoung;Hong, Kang-E;Shin, Min-Sup;Seong, Yeong-Hoon;Cho, Soo-Churl
    • Journal of the Korean Academy of Child and Adolescent Psychiatry, v.12 no.2, pp.225-236, 2001
  • Introduction: Test anxiety is a pervasive problem among high school students in Korea. While anxiety in test situations may actually facilitate the performance of some students, more often it is disruptive and leads to performance decrements. Over the past years, many child psychiatrists have become concerned with understanding the nature of test anxiety, but it is not yet clearly understood. To understand the nature of test anxiety, its relationships with depression, state anxiety, and trait anxiety were examined, along with the relationships between the subscores of test anxiety (worry and emotionality) and the subscores of the CDI, state anxiety, and trait anxiety. Methods: The Test Anxiety Inventory, the Children's Depression Inventory, and the State-Trait Anxiety Inventory were administered to 425 high school students in Seoul. The relationships between test anxiety and the other measures were tested using Pearson correlation coefficients, and regression analysis was performed to test the causal relationships among the variables. Results: The correlation coefficients between test anxiety and depression, state anxiety, and trait anxiety were 0.56 (p<0.05), 0.75 (p<0.05), and 0.53 (p<0.05), respectively. The correlation coefficients between the subscales of test anxiety and depression were all significant, and the correlations between the subscales of test anxiety and state and trait anxiety were also statistically significant. Conclusions: This study indicates that test anxiety is closely related to depression and to state and trait anxiety, and that the subscales of test anxiety are significantly related to those of depression. The correlation coefficients between test anxiety and state-trait anxiety are also statistically significant. Thus, in order to develop preventive and effective methods of treatment, these psychopathological characteristics should be kept in mind.
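The relationships reported here are Pearson product-moment correlations; the coefficient itself is straightforward to compute (the scores below are invented, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two
    equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related scores give r = 1.0; the study's
# test-anxiety/state-anxiety r of 0.75 indicates a strong but
# imperfect linear relationship.
```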


The Validity and Reliability of 'Computerized Neurocognitive Function Test' in the Elementary School Child (학령기 정상아동에서 '전산화 신경인지기능검사'의 타당도 및 신뢰도 분석)

  • Lee, Jong-Bum;Kim, Jin-Sung;Seo, Wan-Seok;Shin, Hyoun-Jin;Bai, Dai-Seg;Lee, Hye-Lin
    • Korean Journal of Psychosomatic Medicine, v.11 no.2, pp.97-117, 2003
  • Objective: This study examines the validity and reliability of the Computerized Neurocognitive Function Test among normal elementary school children. Methods: The K-ABC, K-PIC, and Computerized Neurocognitive Function Test were administered to 120 normal children (10 boys and 10 girls per grade) from June 2002 to January 2003. The children were of at least average intelligence and passed the exclusion criteria. To verify test-retest reliability, the Computerized Neurocognitive Function Test was administered again four weeks later to 30 randomly selected children. Results: In the correlation analysis for validity, four of the continuous performance tests matched those for adults. In the memory tests, the results replicated previous research, with a difference between the forward and backward tests of short-term memory. In the higher cognitive function tests, each test addressed a different purpose. After factor analysis on 43 variables from the 12 tests, 10 factors were extracted, and the total percent of variance explained was 75.5%. The factors were, in order: sustained attention, information processing speed, vigilance, verbal learning, allocation of attention and concept formation, flexibility, concept formation, visual learning, short-term memory, and selective attention. In the correlation with the K-ABC, carried out to prepare explanatory criteria, selectively significant correlations (p<.05-.001) were found with the K-ABC subscales. In the test-retest reliability analysis, practice effects were found, especially in the higher cognitive function tests. However, the split-half reliability (r=0.548-0.7726, p<.05) and internal consistency (0.628-0.878, p<.05) of each examined group were significantly high. Conclusion: The performance of the Computerized Neurocognitive Function Test in normal children showed different developmental characteristics from that in adults.
Basal information for preparing the explanatory criteria could be acquired by examining the relationship with a standardized intelligence test that has a neuropsychological background.
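Split-half reliability, one of the statistics reported above, correlates the two halves of a test and then steps the result up to full test length with the Spearman-Brown formula; a minimal odd-even sketch (the item data in the usage comment are invented):

```python
import math

def pearson_r(x, y):
    """Pearson correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman_brown(r_half):
    """Predicted full-test reliability from a half-test correlation."""
    return 2.0 * r_half / (1.0 + r_half)

def split_half_reliability(item_scores):
    """Odd-even split: each row holds one subject's item scores."""
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    return spearman_brown(pearson_r(odd, even))
```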


Study on Comparison of Growth Performance, Feed Efficiency and Carcass Characteristics for Holstein and F1(Holstein ♀ x Hanwoo ♂) Steers and Heifers (Holstein과 교잡종 거세우 및 처녀우의 성장발육, 사료이용성 및 도체특성 비교 연구)

  • Kang, S.W.;Oh, Y.K.;Kim, K.H.;Choi, C.W.;Son, Y.S.
    • Journal of Animal Science and Technology, v.47 no.4, pp.593-606, 2005
  • The present study was conducted to investigate the optimal feeding levels for producing high-quality meat, on the basis of comparisons of growth performance and carcass characteristics by breed (Holstein vs. F1, Holstein♀×Hanwoo♂), by sex (steer vs. heifer), and by the breed-by-sex interaction. Thirty-two animals in 4 treatments (eight head each) were used for 540 days, from 7 to 24 months of age. The results are summarized as follows. The ranges of average daily gain were 0.733 to 1.018, 0.994 to 1.255, 0.947 to 1.259, and 0.736 to 0.824 kg for the growing, early-fattening, mid-fattening, and finishing periods, respectively. The range of average daily gain for the entire period was 0.882 to 1.061 kg; gains were higher for Holstein (by 7.3%) and for steers (by 10.5%) than for F1 and heifers, respectively. Concentrate and total digestible nutrient intakes per kg gain were higher for Holstein and for heifers than for F1 and steers, respectively. These findings may indicate that feed efficiency is higher for F1 than for Holstein, and higher for steers than for heifers. In carcass characteristics, back fat was thicker and rib-eye area smaller for Holstein than for F1. Rib-eye area per kg carcass weight was larger for F1 and for heifers than for Holstein and steers, respectively. Meat color was better for Holstein than for F1, with no difference between sexes. In the physicochemical properties of the longissimus dorsi, shear force, cooking loss, water holding capacity, and the panel-test scores of juiciness, tenderness, and flavor were better for F1 and for heifers than for Holstein and steers, respectively. From the above results, we conclude that F1 and heifers, rather than Holstein and steers, are recommended for high-quality meat production.
For steers and heifers of Holstein and F1, the optimal feeding levels may be concentrates at 1.9% of body weight and rice straw at 25% of concentrate intake.
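The recommended feeding levels translate directly into daily allowances; a trivial sketch (the 500 kg body weight is an assumed example, not a figure from the paper):

```python
def concentrate_allowance(body_weight_kg, rate=0.019):
    """Daily concentrate allowance at 1.9% of body weight (kg/day)."""
    return body_weight_kg * rate

def rice_straw_allowance(concentrate_kg, rate=0.25):
    """Daily rice straw allowance at 25% of concentrate intake (kg/day)."""
    return concentrate_kg * rate

conc = concentrate_allowance(500.0)  # ~9.5 kg/day for a 500 kg animal
straw = rice_straw_allowance(conc)   # ~2.4 kg/day
```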

A Mutual P3P Methodology for Privacy Preserving Context-Aware Systems Development (프라이버시 보호 상황인식 시스템 개발을 위한 쌍방향 P3P 방법론)

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems, v.18 no.1, pp.145-162, 2008
  • One of the big concerns in the e-society is privacy. In particular, in developing a robust ubiquitous smart space and the corresponding services, user profiles and preferences are collected by the service providers. The privacy issue is even more critical in context-aware services, simply because most context data are themselves private information: the user's current location, current schedule, friends nearby, and even his or her health data. To realize the potential of ubiquitous smart space, the systems embedded in the space should incorporate personal privacy preferences. When users invoke a set of services, they are asked to allow the service providers or the smart space to make use of personal information that raises privacy concerns. For this reason, users unhappily provide the personal information or even refuse the service. On the other side, the service provider needs personal information as rich as possible, yet gathered minimally, to discern loyal and trustworthy customers from those who are not. It would be desirable to enlarge the personal information that can be shared in compliance with the service provider's request, while minimizing both the provider's requests for information the user will not submit and the user's submission of information that is of no value to the provider. In particular, if any personal information required by the service provider is not allowed, the service will not be provided to the user. P3P (Platform for Privacy Preferences) has been regarded as one of the promising alternatives for preserving personal information in the course of electronic transactions. However, P3P mainly focuses on preserving the buyer's personal information; from time to time, the service provider's business data should also be protected from unintended usage by the buyers.
Moreover, even though the user's privacy preferences may depend on the user's current context, legacy P3P does not handle contextual changes of privacy preferences. Hence, the purpose of this paper is to propose a mutual P3P-based negotiation mechanism. To do so, first, the service provider's privacy concerns are considered as well as the user's: the user's privacy policy on the service provider's information should also be made known to the service provider before the service begins. Second, the privacy policy is designed contextually, according to the user's current context, because the nomadic user's privacy concern structure may be altered contextually. Hence, the methodology includes mutual privacy policy and personalization. The overall framework of the mechanism and a new code of ethics are described in Section 2. The pervasive platform for mutual P3P considers the user type and the context field, which involves current activity, location, social context, objects nearby, and the physical environment. Our mutual P3P includes the privacy preferences not only of the buyers but also of the sellers, that is, the service providers. A negotiation methodology for mutual P3P is proposed in Section 3, based on the observation that privacy concerns arise when there are needs for information access and, at the same time, needs for information hiding. Our mechanism was implemented on an actual shopping mall to increase the feasibility of the idea proposed in this paper. A shopping service is assumed as the context-aware service, and the data groups for the service are enumerated; the privacy policy for each data group is represented in APPEL format. To examine the performance of the example service, a simulation approach is adopted in Section 4. For the simulation, six data elements are considered: UserID, user preference, phone number, home address, product information, and service profile. For the negotiation, reputation is selected as the strategic value.
Then the following cases are compared: legacy P3P; mutual P3P without the strategic value; and mutual P3P with the strategic value. The simulation results show that mutual P3P outperforms legacy P3P. Moreover, when mutual P3P is considered with the strategic value, performance is better, in terms of service safety, than when it is considered without the strategic value.
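The all-or-nothing rule described in this abstract (service denied if any required element is withheld, and neither side over-sharing otherwise) can be sketched as a set computation. The element names follow the paper's shopping-mall simulation, but the function itself is our simplification of the negotiation, not the paper's APPEL-based mechanism:

```python
def negotiate(required, optional, user_allows):
    """Simplified mutual-P3P exchange rule.

    Returns the data elements actually exchanged, or None when a
    required element is withheld (no service, per the abstract).
    Optional elements the user withholds are simply dropped, so the
    user never over-shares and the provider never over-collects.
    """
    required, allowed = set(required), set(user_allows)
    if not required <= allowed:
        return None
    return sorted(required | (set(optional) & allowed))

# Element names from the paper's simulation:
granted = negotiate(
    required=["UserID", "Product information"],
    optional=["Phone number", "User preference"],
    user_allows=["UserID", "Product information", "User preference"],
)
```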