Title/Summary/Keyword: Potential evaluation


A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been a continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, the data related to product information is collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data is embedded into vector space by Word2Vec, and the product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and applied a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. The product names similar to KSIC index words were extracted based on cosine similarity. The market size of the extracted products as one product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional methods that rely on sampling or require multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis report publishing by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantics-based word embedding module can be advanced by imposing a proper order on the preprocessed dataset or by combining another measure such as Jaccard similarity with Word2Vec. Also, the product group clustering step could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
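
To make the pipeline concrete, here is a minimal sketch of the embedding-and-aggregation step using gensim's Word2Vec, with the vector dimension (300) and window size (15) reported above; the input file, column names, and the similarity threshold are assumptions for illustration, not the authors' code.

```python
# A minimal sketch of the pipeline described above (not the authors' code).
# Assumed inputs: a CSV with hypothetical columns "product_name" and "sales";
# the 0.7 similarity threshold is also an assumption for illustration.
import pandas as pd
from gensim.models import Word2Vec

df = pd.read_csv("company_products.csv")            # hypothetical microdata file
sentences = [name.split() for name in df["product_name"]]

# Vector dimension 300 and window size 15 are the parameters reported above.
model = Word2Vec(sentences, vector_size=300, window=15, min_count=1, sg=1)

def estimate_market_size(ksic_index_word, threshold=0.7):
    """Sum sales over products whose names contain tokens similar to the index word."""
    # most_similar ranks vocabulary tokens by cosine similarity to the query.
    similar = {w for w, sim in model.wv.most_similar(ksic_index_word, topn=500)
               if sim >= threshold}
    similar.add(ksic_index_word)                    # assumes the word is in-vocabulary
    mask = df["product_name"].apply(
        lambda name: any(tok in similar for tok in name.split()))
    return df.loc[mask, "sales"].sum()
```

Raising the threshold narrows the product group while lowering it broadens the category, which is the adjustability the abstract highlights.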

Evaluation of Multiple System Atrophy and Early Parkinson's Disease Using $^{123}I$-FP-CIT SPECT ($^{123}I$-FP-CIT SPECT를 이용한 다중계위축증 및 조기 파킨슨병에서의 평가)

  • Oh, So-Won;Kim, Yu-Kyeong;Lee, Byung-Chul;Kim, Bom-Sahn;Kim, Ji-Sun;Kim, Jong-Min;Kim, Sang-Eun
    • Nuclear Medicine and Molecular Imaging / v.43 no.1 / pp.10-18 / 2009
  • Purpose: We investigated quantification of the dopamine transporter (DAT) and serotonin transporter (SERT) on $^{123}I$-FP-CIT SPECT for differentiating between multiple system atrophy (MSA) and idiopathic Parkinson's disease (IPD). Materials and Methods: N-fluoropropyl-$2{\beta}$-carbomethoxy-$3{\beta}$-4-[$^{123}I$]-iodophenylnortropane SPECT ($^{123}I$-FP-CIT SPECT) was performed in 8 patients with MSA (mean age: $64.0{\pm}4.5yrs$, m:f=6:2), 13 with early IPD (mean age: $65.5{\pm}5.3yrs$, m:f=9:4), and 12 healthy controls (mean age: $63.3{\pm}5.7yrs$, m:f=8:4). Standard regions of interest (ROIs) of the striatum, to evaluate DAT, and of the hypothalamus and midbrain, for SERT, were drawn on standard template images and applied to each image taken 4 hours after radiotracer injection. Striatal specific binding for DAT, and hypothalamic and midbrain specific binding for SERT, were calculated as region/reference ratios based on the transient equilibrium method. Group differences were tested using ANOVA with post hoc analysis. Results: DAT in the whole striatum and striatal subregions was significantly decreased in both the MSA and early IPD groups compared with healthy controls (p<0.05 for all). In early IPD, a significant increase in the anterior-to-posterior putamen uptake ratio and a trend toward an increase in the caudate-to-putamen ratio were observed. In MSA, the decrease of DAT was accompanied by no difference in the striatal uptake pattern compared with healthy controls. Regarding the brain regions where $^{123}I$-FP-CIT binding is predominantly determined by SERT, MSA patients showed a decrease in $^{123}I$-FP-CIT binding in the pons compared with controls as well as early IPD patients (MSA: $0.22{\pm}0.1$, healthy controls: $0.33{\pm}0.19$, IPD: $0.29{\pm}0.19$); however, this did not reach statistical significance. Conclusion: In this study, differential patterns in the reduction of DAT in the striatum and in the reduction of pontine $^{123}I$-FP-CIT binding, predominantly determined by SERT, were observed in MSA patients on $^{123}I$-FP-CIT SPECT. We suggest that quantification of SERT as well as DAT using $^{123}I$-FP-CIT SPECT is helpful for differentiating parkinsonian disorders at an early stage.
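
The abstract does not spell out the ratio formula; under the transient equilibrium method, a region/reference specific binding ratio is commonly computed as follows (the choice of reference region here is an assumption, as the abstract does not name the one actually used):

```latex
% Region/reference specific binding ratio under transient equilibrium.
% C_ROI: counts in the target region (striatum, hypothalamus, or midbrain);
% C_ref: counts in a non-specific reference region (e.g., occipital cortex --
% the abstract does not name the reference region actually used).
\mathrm{SBR} = \frac{C_{\mathrm{ROI}} - C_{\mathrm{ref}}}{C_{\mathrm{ref}}}
```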

The Effect of Recombinant Human Epidermal Growth Factor on Cisplatin and Radiotherapy Induced Oral Mucositis in Mice (마우스에서 Cisplatin과 방사선조사로 유발된 구내염에 대한 재조합 표피성장인자의 효과)

  • Na, Jae-Boem;Kim, Hye-Jung;Chai, Gyu-Young;Lee, Sang-Wook;Lee, Kang-Kyoo;Chang, Ki-Churl;Choi, Byung-Ock;Jang, Hong-Seok;Jeong, Bea-Keon;Kang, Ki-Mun
    • Radiation Oncology Journal / v.25 no.4 / pp.242-248 / 2007
  • Purpose: To study the effect of recombinant human epidermal growth factor (rhEGF) on oral mucositis induced by cisplatin and radiotherapy in a mouse model. Materials and Methods: Twenty-four ICR mice were divided into three groups: the normal control group, the no-rhEGF group (treated with cisplatin and radiation), and the rhEGF group (treated with cisplatin, radiation, and rhEGF). A model of mucositis induced by cisplatin and radiotherapy was established by injecting mice with cisplatin (10 mg/kg) on day 1 and exposing the head and neck to radiation (5 Gy/day) on days $1{\sim}5$. rhEGF was administered subcutaneously on days -1 to 0 (1 mg/kg/day) and on days 3 to 5 (1 mg/kg/day). Evaluation included body weight, oral intake, and histology. Results: Comparing the change in body weight between the rhEGF group and the no-rhEGF group, a statistically significant difference was observed in the rhEGF group for the 5 days after day 3 of the experiment. Both the rhEGF group and the no-rhEGF group had reduced food intake until day 5 of the experiment, and the mice then demonstrated increased food intake after day 13 of the experiment. When histological examination was conducted on day 7 after treatment with cisplatin and radiation, the rhEGF group showed a focal cellular reaction in the epidermal layer of the mucosa, while the no-rhEGF group did not show inflammation of the oral mucosa. Conclusion: These findings suggest that rhEGF has the potential to reduce the oral mucositis burden in mice after treatment with cisplatin and radiation. The optimal dose, number, and timing of the administration of rhEGF require further investigation.

Electronic Word-of-Mouth in B2C Virtual Communities: An Empirical Study from CTrip.com (B2C虚拟社区中的电子口碑: 关于携程旅游网的实证研究)

  • Li, Guoxin;Elliot, Statia;Choi, Chris
    • Journal of Global Scholars of Marketing Science / v.20 no.3 / pp.262-268 / 2010
  • Virtual communities (VCs) have developed rapidly, with more and more people participating in them to exchange information and opinions. A virtual community is a group of people who may or may not meet one another face to face, and who exchange words and ideas through the mediation of computer bulletin boards and networks. A business-to-consumer virtual community (B2CVC) is a commercial group that creates a trustworthy environment intended to make consumers more willing to buy from an online store. B2CVCs create a social atmosphere through information contributions such as recommendations, reviews, and ratings of buyers and sellers. Although the importance of B2CVCs has been recognized, few studies have examined members' word-of-mouth behavior within these communities. This study proposes a model of involvement, satisfaction, trust, "stickiness," and word-of-mouth in a B2CVC and explores the relationships among these elements based on empirical data. The objectives are threefold: (i) to empirically test a B2CVC model that integrates measures of beliefs, attitudes, and behaviors; (ii) to better understand the nature of these relationships, specifically through word-of-mouth as a measure of revenue generation; and (iii) to better understand the role of B2CVC stickiness in CRM marketing. The model incorporates three key elements concerning community members: (i) their beliefs, measured in terms of their involvement assessment; (ii) their attitudes, measured in terms of their satisfaction and trust; and (iii) their behavior, measured in terms of site stickiness and word-of-mouth. Involvement is considered the motivation for consumers to participate in a virtual community. For B2CVC members, information searching and posting have been proposed as the main purposes of their involvement. Satisfaction has been treated as an important indicator of a member's overall community evaluation, conceptualized through different levels of member interaction with the VC. The formation and expansion of a VC depend on the willingness of members to share information and services. Researchers have found that trust is a core component facilitating anonymous interaction in VCs and e-commerce, and trust-building in VCs has therefore been a common research topic. It is clear that the success of a B2CVC depends on the stickiness of its members to enhance purchasing potential. Opinions communicated and information exchanged between members may represent a type of written word-of-mouth. Therefore, word-of-mouth is one of the primary factors driving the diffusion of B2CVCs across the Internet. Figure 1 presents the research model and hypotheses. The model was tested through an online survey of CTrip Travel VC members. A total of 243 collected questionnaires was reduced to 204 usable questionnaires through data cleaning. The study's hypotheses examined the extent to which involvement, satisfaction, and trust influence B2CVC stickiness and members' word-of-mouth. Structural equation modeling was used to test the hypotheses, and the structural model fit indices were within accepted thresholds: ${\chi}^2$/df was 2.76, NFI was .904, IFI was .931, CFI was .930, and RMSEA was .017. Results indicated that involvement has a significant influence on satisfaction (p<0.001, ${\beta}$=0.809). The proportion of variance in satisfaction explained by members' involvement was over half (adjusted $R^2$=0.654), reflecting a strong association. The effect of involvement on trust was also statistically significant (p<0.001, ${\beta}$=0.751), with 56 percent of the variance in trust explained by involvement (adjusted $R^2$=0.563). When the construct stickiness was treated as a dependent variable, the proportion of variance explained by trust and satisfaction was relatively low (adjusted $R^2$=0.331). Satisfaction had a significant influence on stickiness (${\beta}$=0.514). However, unexpectedly, the influence of trust was not significant (p=0.231, t=1.197), so that hypothesis was rejected. The importance of stickiness in the model was underscored by its effect on e-WOM (${\beta}$=0.920, p<0.001); stickiness explained over eighty percent of the variance in e-WOM (adjusted $R^2$=0.846). Overall, the results supported the hypothesized relationships between members' involvement in a B2CVC and their satisfaction with and trust of it. However, trust, a traditional measure in behavioral models, had no significant influence on stickiness in the B2CVC environment. This study contributes to the growing body of literature on B2CVCs, specifically addressing gaps in the academic research by integrating measures of beliefs, attitudes, and behaviors in one model. The results provide additional insights into behavioral factors in a B2CVC environment, helping to sort out relationships between traditional measures and relatively new ones. For practitioners, identifying factors, such as member involvement, that strongly influence B2CVC member satisfaction can help focus technological resources in key areas. Global e-marketers can develop marketing strategies directly targeting B2CVC members. In the global tourism business, they can target Chinese members of a B2CVC by providing special discounts for active community members or by developing early-adopter programs to encourage stickiness in the community. Future studies are called for, with more sophisticated modeling, to expand the measurement of B2CVC member behavior and to conduct experiments across industries, communities, and cultures.
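
For readers who want to reproduce this kind of analysis, the hypothesized paths can be written down as a structural model; the sketch below uses the semopy package with placeholder variable names, and illustrates the model structure rather than the authors' actual code.

```python
# Sketch of the hypothesized structural model (involvement -> satisfaction,
# trust -> stickiness -> e-WOM) using semopy; not the authors' code.
# Observed-variable names are placeholders for the survey measures.
import pandas as pd
import semopy

model_desc = """
satisfaction ~ involvement
trust        ~ involvement
stickiness   ~ satisfaction + trust
ewom         ~ stickiness
"""

data = pd.read_csv("ctrip_survey.csv")   # hypothetical cleaned responses (n = 204)
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                   # path coefficients with p-values
print(semopy.calc_stats(model))          # fit indices such as CFI and RMSEA
```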

DC Resistivity method to image the underground structure beneath river or lake bottom (하저 지반특성 규명을 위한 전기비저항 탐사)

  • Kim Jung-Ho;Yi Myeong-Jong;Song Yoonho;Cho Seong-Jun;Lee Seong-Kon;Son Jeongsul
    • Korean Society of Earth and Exploration Geophysicists, Conference Proceedings / 2002.09a / pp.139-162 / 2002
  • Since weak zones or geological lineaments are likely to be eroded, weak zones may develop beneath rivers, and a careful evaluation of ground conditions is important when constructing structures that pass beneath a river. DC resistivity surveys, however, have seldom been applied to the investigation of water-covered areas, possibly because of difficulties in data acquisition and interpretation. Acquiring high-quality data may be the most important factor, and it is more difficult than in a land survey because of the water layer overlying the underground structure to be imaged. Through numerical modeling and the analysis of case histories, we studied the methodology of resistivity surveying in water-covered areas, from the characteristics of the measured data, through the data acquisition method, to the interpretation method. We organize the discussion according to the installed locations of the electrodes, i.e., floating them on the water surface versus installing them at the water bottom, since the methods of data acquisition and interpretation vary depending on the electrode location. Through this study, we confirmed that the DC resistivity method can provide fairly reasonable subsurface images. It was also shown that installing electrodes at the water bottom gives a subsurface image with much higher resolution than floating them on the water surface. Since data acquired in a water-covered area have much lower sensitivity to the underground structure than those acquired on land, and can be contaminated by higher noise, such as streaming potential, it is very important to select an acquisition method and electrode array that provide data with a high signal-to-noise ratio as well as high resolving power. The method of installing electrodes at the water bottom is suitable for detailed surveys because of its much higher resolving power, whereas the floating method, especially the streamer DC resistivity survey, suits reconnaissance surveys owing to the very high speed of field work.
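
For context, such surveys invert apparent resistivities computed from four-electrode measurements; the standard land-surface form is shown below, with the caveat that submerged or water-bottom electrodes require modified geometric factors accounting for the water layer, which is part of the interpretation difficulty discussed above.

```latex
% Apparent resistivity from a four-electrode measurement: K is the geometric
% factor set by the current (A, B) and potential (M, N) electrode positions,
% \Delta V the measured potential difference, and I the injected current.
% This K is the half-space (land-surface) form; electrodes floating on or
% installed beneath a water layer require modified geometric factors.
\rho_a = K\,\frac{\Delta V}{I},
\qquad
K = 2\pi \left( \frac{1}{r_{AM}} - \frac{1}{r_{AN}} - \frac{1}{r_{BM}} + \frac{1}{r_{BN}} \right)^{-1}
```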


Evaluation of Radiation Exposure to Nurse on Nuclear Medicine Examination by Use Radioisotope (방사성 동위원소를 이용한 핵의학과 검사에서 병동 간호사의 방사선 피폭선량 평가)

  • Jeong, Jae Hoon;Lee, Chung Wun;You, Yeon Wook;Seo, Yeong Deok;Choi, Ho Yong;Kim, Yun Cheol;Kim, Yong Geun;Won, Woo Jae
    • The Korean Journal of Nuclear Medicine Technology / v.21 no.1 / pp.44-49 / 2017
  • Purpose: Radiation exposure management is strictly regulated for radiation workers, but there are only a few studies on the potential radiation exposure of non-radiation workers, especially nurses in general wards. The present study aimed to estimate the total exposure of general-ward nurses through close contact with patients undergoing nuclear medicine examinations. Materials and Methods: Radiation exposure was determined using thermoluminescent dosimeters (TLD) and optically stimulated luminescence (OSL) dosimeters for 14 general-ward nurses from October 2015 to June 2016. The external radiation rate was measured immediately after injection and after examination at the skin surface and at 50 cm and 1 m distance from 50 patients (PET/CT, 20; bone scan, 20; myocardial SPECT, 10). From these measurements, the effective half-life and the total radiation exposure expected for nurses were calculated. The expected total exposure was then compared with the total exposure actually measured for the nurses by TLD and OSL. Results: The mean and maximum radiation exposures of the 14 general-ward nurses were 0.01 and 0.02 mSv, respectively, in each measuring period. The external radiation rate after injection at the skin surface, 0.5 m, and 1 m from the patients was $376.0{\pm}25.2$, $88.1{\pm}8.2$, and $29.0{\pm}5.8{\mu}Sv/hr$, respectively, for PET/CT; $206.7{\pm}56.6$, $23.1{\pm}4.4$, and $10.1{\pm}1.4{\mu}Sv/hr$ for bone scan; and $22.5{\pm}2.6$, $2.4{\pm}0.7$, and $0.9{\pm}0.2{\mu}Sv/hr$ for myocardial SPECT. After the examination, the external radiation rate at the skin surface, 0.5 m, and 1 m decreased to $165.3{\pm}22.1$, $38.7{\pm}5.9$, and $12.4{\pm}2.5{\mu}Sv/hr$ for PET/CT; $32.1{\pm}8.7$, $6.2{\pm}1.1$, and $2.8{\pm}0.6{\mu}Sv/hr$ for bone scan; and $14.0{\pm}1.2$, $2.1{\pm}0.3$, and $0.8{\pm}0.2{\mu}Sv/hr$ for myocardial SPECT. Based on these results, an effective half-life was calculated, and the time to reach the dose limit in the 'Nuclear Safety Act' was calculated conservatively, without considering the half-life, starting 30 minutes after the examination. In order of distance (skin surface, 0.5 m, and 1 m from the patient), it was 7.9, 34.1, and 106.8 hr for PET/CT; 40.4, 199.5, and 451.1 hr for bone scan; and 62.5, 519.3, and 1313.6 hr for myocardial SPECT. Conclusion: The radiation exposure rate may differ slightly depending on the work process and the ward environment. The exposure rate was measured at each step of the general examination procedure, which makes our results more reliable. Our results clearly show that the total radiation exposure caused by residual radioactive isotopes in patients' bodies is negligible, even compared with natural background radiation. In conclusion, general-ward nurses are exposed to far less than the normal dose limit, and the effect of exposure from contact with patients undergoing nuclear medicine examinations is negligible.
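
The abstract does not state the formulas; the standard relations behind an effective half-life calculation and the cumulative dose from a decaying external dose rate are:

```latex
% Effective half-life combining physical decay and biological clearance,
% and the cumulative dose from an exponentially decaying external dose
% rate R(t) = R_0 e^{-\lambda_{\mathrm{eff}} t}:
\frac{1}{T_{\mathrm{eff}}} = \frac{1}{T_{\mathrm{phys}}} + \frac{1}{T_{\mathrm{bio}}},
\qquad
\lambda_{\mathrm{eff}} = \frac{\ln 2}{T_{\mathrm{eff}}},
\qquad
D(t) = \frac{R_0}{\lambda_{\mathrm{eff}}}\left(1 - e^{-\lambda_{\mathrm{eff}} t}\right)
```

The conservative calculation described above, which ignores decay entirely, reduces to $t = D_{\mathrm{limit}} / R_0$.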


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using origin-destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the OD survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting OD trip length frequency (OD TLF) distributions by trip type are applied to the gravity model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: internal-internal (I-I), internal-external (I-E), and external-external (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the micro-scale calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These partial GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to ground counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, selected-link-based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected-link-based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and vehicle miles of travel (VMT) analysis. Screenline volume analysis, using four screenlines with 28 check points, is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH), with 37 check points, the US highways (USH), with 50 check points, and the State highways (STH), with 67 check points, is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area, with 26 check points, the West area, with 36 check points, the East area, with 29 check points, and the South area, with 64 check points, is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35%, respectively, whereas the average ground counts are 481, 1383, 1532, and 3154, respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
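
As a concrete illustration of the core machinery described above, here is a minimal sketch of a singly constrained gravity model and the SELINK link adjustment factor; the zone counts, friction-factor curve, and volumes are invented for illustration and are not taken from the study.

```python
# Minimal sketch of a singly constrained gravity model and the SELINK link
# adjustment factor described above; all numbers are illustrative only.
import numpy as np

def gravity_model(P, A, F):
    """T[i, j] = P[i] * A[j] * F[i, j] / sum_k (A[k] * F[i, k])."""
    weighted = A[None, :] * F
    return P[:, None] * weighted / weighted.sum(axis=1, keepdims=True)

P = np.array([100.0, 50.0, 80.0])        # zonal truck trip productions
A = np.array([60.0, 90.0, 80.0])         # zonal truck trip attractions
t = np.array([[ 5.0, 20.0, 30.0],        # zone-to-zone travel times
              [20.0,  5.0, 15.0],
              [30.0, 15.0,  5.0]])
F = np.exp(-0.1 * t)                     # assumed calibrated friction-factor curve
T = gravity_model(P, A, F)               # row sums of T reproduce P

# SELINK adjustment: the productions/attractions of every zone whose trips
# are assigned to a selected link are scaled by ground count / assigned volume.
ground_count, assigned_volume = 950.0, 1100.0
link_adjustment_factor = ground_count / assigned_volume
```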


Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems / v.20 no.2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of exposing personal information is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concerns over the unrestricted availability of context information. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have only focused on a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Hence, conducting a survey under the assumption that the participants have sufficient experience or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they only consider location information as context data). A better understanding of information privacy concerns in context-aware personalized services is therefore needed. The purpose of this paper is to identify a generic set of factors for elemental information privacy concerns in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in the privacy and context-aware system fields was involved in our research. The Delphi rounds faithfully followed the procedure for the Delphi study proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For the first round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern. To aid this, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factors to determine the final sub-factors from the candidates. The sub-factors were found from the literature survey. Final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. Our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to the extent to which users have privacy concerns.
A traditional questionnaire survey was not selected because users generally lack understanding of and experience with this new technology in context-aware personalized services. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensor networks as the most important factors among the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and which technologies, in what sequence, are needed to acquire what types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, along with the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following the results of the sub-factor evaluation, additional studies would be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as a highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked at the next highest level of importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services built toward the anywhere-anytime-any-device concept, have been regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
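
The concordance analysis mentioned above is commonly implemented as Kendall's coefficient of concordance W; the following small sketch (an illustration, not the authors' code) computes W from an experts-by-items matrix of ranks.

```python
# Kendall's coefficient of concordance W for m experts ranking n items
# (an illustration of the concordance analysis, not the authors' code).
import numpy as np

def kendalls_w(ranks):
    """ranks: (m, n) array where each row is one expert's ranking 1..n (no ties)."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

ranks = np.array([[1, 2, 3, 4],
                  [1, 3, 2, 4],
                  [2, 1, 3, 4]])
print(kendalls_w(ranks))   # ~0.78, i.e., fairly strong agreement among experts
```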