• Title/Summary/Keyword: 3D to 2D development


Blue Carbon Resources in the East Sea of Korea and Their Values and Potential Applications (동해안 블루카본 자원의 가치와 활용방안)

  • Yoon, Ho-Sung;Do, Jeong-Mi;Jeon, Byung Hee;Yeo, Hee-Tae;Jang, Hyeong Seok;Yang, Hee Wook;Suh, Ho Seong;Hong, Ji Won
    • Journal of Life Science
    • /
    • v.32 no.7
    • /
    • pp.578-587
    • /
    • 2022
  • Korea, the world's 7th largest emitter of greenhouse gases, has raised its national greenhouse gas reduction target as international regulations have been strengthened. Because coastal and marine ecosystems can serve as important nature-based solutions (NbS) for implementing climate change mitigation and adaptation plans, blue carbon ecosystems are now receiving attention. Blue carbon refers to carbon that is absorbed as biomass by coastal or oceanic ecosystems through photosynthesis and then deposited and stored for a long period. Currently, only three blue carbon ecosystems are officially recognized by the Intergovernmental Panel on Climate Change (IPCC): mangroves, salt marshes, and seagrasses. However, new research results on the high CO2 sequestration and storage capacity of various new blue carbon sinks, such as seaweeds, microalgae, coral reefs, and non-vegetated tidal flats, have recently been reported to the academic community, and as the related carbon accounting undergoes scientific verification, the possibility of official IPCC recognition is gradually increasing. In this review, the current status and potential value of seaweeds, seagrass beds, and non-vegetated tidal flats, the blue carbon sources of the east coast, are discussed. This review finds that seaweed resources are the most effective NbS in the East Sea of Korea. We also suggest directions for research and development (R&D) and utilization so that new blue carbon sinks can obtain IPCC international certification in the near future.

Deviations of Implant Position between Pre- and Post-operation in Computer-guided Template-based Implant Placement (Computer-guided template를 이용한 임플란트 식립에서 술 전과 술 후 사이의 임플란트 위치에 따른 변위량 검사)

  • Kim, Won;Kim, Seung-Mi;Kim, Hyo-Jung;Song, Eun-Young;Lee, Si-Ho;Oh, Nam-Sik
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.27 no.2
    • /
    • pp.175-184
    • /
    • 2011
  • With the development of implant restoration techniques, computer-guided systems are increasingly used for edentulous patients. A simulated operation is carried out based on CT information about the patient's bone quantity, bone quality, and anatomical landmarks. However, there are differences in position between the planned (programmed) implant and the implant actually placed. If the deviation is severe, 'passive fit' may fail and the path of the implant restoration may be unsuitable. The aim of this study was to evaluate the degree of deviation between the planned and post-operative implant positions. Five patients treated with the NobelGuide system (Nobel Biocare AB, Göteborg, Sweden) in the Department of Prosthodontics, Inha University were included in this study. CT scans and intra-oral impressions were taken pre-operatively, and a surgical stent was produced by Nobel Biocare based on the CT images and the study model. To fabricate a pre-operative study model, lab analogs were connected to the surgical template and a model was poured in type 4 dental stone. At the final impression, a post-operative study model was fabricated by conventional procedures. Each study model was CT scanned, and each implant was simulated in its three-dimensional position using Procera software (Procera Software Clinical Design Premium, version 1.5; Nobel Biocare AB). In the 3D simulated models, the lengths and angulations between each pair of implants were measured for both the pre- and post-operative implants, and the linear and angular deviations between them were recorded. A total of 24 implants were included in this study, and 58 inter-implant sites were measured for linear and angular deviations. The mean linear deviation was 0.41 mm (range 0-1.7 mm), and the mean angular deviation was 1.99° (range 0°-6.7°). Both the mean linear and angular deviations appear to be well within acceptable limits for the clinical application of computer-guided implant systems.
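The linear and angular deviations reported above reduce to simple vector geometry between the planned and placed implant positions. The following is a minimal illustrative sketch, not the computation performed by the Procera software; the coordinate tuples used are hypothetical:

```python
import math

def linear_deviation(p_planned, p_placed):
    """Euclidean distance (mm) between corresponding planned and placed points."""
    return math.dist(p_planned, p_placed)

def angular_deviation(axis_planned, axis_placed):
    """Angle (degrees) between the planned and placed implant long axes."""
    dot = sum(a * b for a, b in zip(axis_planned, axis_placed))
    norm_a = math.sqrt(sum(a * a for a in axis_planned))
    norm_b = math.sqrt(sum(b * b for b in axis_placed))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b)))))
```

For example, a placement shifted 1.7 mm along the insertion axis reproduces the maximum linear deviation reported in the study, and the angle between axis vectors (0, 0, 1) and (0, 1, 1) illustrates a 45° angular deviation.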

A Study on the Role of Intratracheally Administered Histamine in Neutrophil-mediated Acute Lung Injury in Rats (호중구를 매개하는 백서의 급성 폐손상의 병리가전에 있어 기도내로 투여한 히스타민의 역활에 관하여)

  • Koh, Youn-Suck;Hybertson, Brooks M.;Jepson, Eric K.;Kim, Mi-Jung;Lee, In-Chul;Lim, Chae-Man;Lee, Sang-Do;Kim, Dong-Soon;Kim, Won-Dong;Repine, John E.
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.3
    • /
    • pp.308-322
    • /
    • 1996
  • Background: Neutrophils are considered to play critical roles in the development of acute respiratory distress syndrome. Histamine, which is abundant in lung tissue, increases neutrophil rolling by upregulating P-selectin expression on the surface of endothelial cells and is known to interact with IL-1, IL-8, and TNF-α. We investigated the effect of histamine on acute lung injury in rats induced by intratracheal insufflation of TNF-α, which is less potent than IL-1 in causing lung injury in rats. Methods: We intratracheally instilled saline, TNF (R&D, 500 ng), IL-1 (R&D, 50 ng), or histamine at various doses (1.1, 11, and 55 μg/kg), with and without TNF, in Sprague-Dawley rats weighing 270-370 g. We also treated rats intratracheally with IL-1 (50 ng) together with histamine (55 μg/kg). Where histamine had synergistic effects on parameters of TNF-induced acute lung injury, antihistamines (Sigma; mepyramine as an H1 receptor blocker and ranitidine as an H2 receptor blocker, 10 mg/kg each) were co-administered intravenously to rats treated intratracheally with TNF plus histamine (1.1 μg/kg). After 5 h we measured lung lavage neutrophil numbers, lavage cytokine-induced neutrophil chemoattractant (CINC), lung myeloperoxidase (MPO) activity, and lung leak. We also insufflated TNF intratracheally with or without histamine (11 μg/kg) and measured lung leak after 24 h. Statistical analyses were performed with the Kruskal-Wallis nonparametric ANOVA with Dunn's multiple comparison test or the Mann-Whitney U test. Results: Rats given TNF, histamine alone (11 and 55 μg/kg), or TNF with histamine (1.1, 11, and 55 μg/kg) intratracheally had increased (p<0.05) lung MPO activity compared with saline-treated control rats. TNF with histamine 11 μg/kg increased MPO activity (p=0.0251) compared with TNF alone. TNF and TNF with histamine (1.1, 11, and 55 μg/kg) all increased (p<0.05) lung leak, lavage neutrophil numbers, and lavage CINC activity compared with saline. TNF with histamine 1.1 μg/kg increased (p=0.0367) lavage neutrophil numbers compared with TNF alone. However, histamine had no additive effect with TNF on acute lung leak at 5 h or 24 h. Treatment with the H1 and H2 antagonists inhibited the increases in lavage neutrophil accumulation and lavage CINC activity elicited by histamine co-treatment in TNF-induced acute lung injury. Rats given IL-1 together with histamine intratracheally showed no increase in lung leak compared with IL-1 alone. Conclusion: Intratracheally administered histamine had no synergistic effect on TNF-induced acute lung leak despite additive effects on MPO activity and lavage neutrophil numbers. These observations suggest that intratracheal histamine does not play a synergistic role in neutrophil-mediated acute lung injury in rats.
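The group comparisons above rely on rank-based nonparametric tests. As a toy illustration of the statistic underlying the Mann-Whitney U test (the study additionally used the Kruskal-Wallis test with Dunn's correction, which is not reproduced here), a pure-Python sketch with hypothetical measurements:

```python
def mann_whitney_u(x, y):
    """Smaller Mann-Whitney U statistic, by direct pairwise comparison.

    Suitable only for small samples; ties count one half, as in the
    standard definition of the statistic.
    """
    u = sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)
    return min(u, len(x) * len(y) - u)
```

For instance, `mann_whitney_u([1, 2, 3], [4, 5, 6])` returns 0, the complete-separation case; in practice the statistic is then referred to a U table or normal approximation for the p-value.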


A Clinical Study on the Hypercalcemia in Primary Bronchogenic Carcinoma (고칼슘혈증을 동반한 원발성 폐암의 임상적 특징)

  • Park, Hye-Jung;Shin, Kyeong-Cheol;Moon, Young-Chul;Chung, Jin-Hong;Lee, Kwan-Ho;Sung, Cha-Kyung;Lee, Hyun-Woo
    • Journal of Yeungnam Medical Science
    • /
    • v.16 no.2
    • /
    • pp.208-218
    • /
    • 1999
  • Background: Lung cancer-associated hypercalcemia is one of the most disabling and life-threatening paraneoplastic disorders. Humoral hypercalcemia is responsible for most lung cancer-associated hypercalcemia. Patients with hypercalcemia are usually in an advanced stage with obvious bulky tumor and carry a poor prognosis. Materials and Methods: A total of 29 patients satisfied the following criteria: histologically proven primary lung cancer, corrected calcium level ≥ 10.5 mg/dL, and symptoms that could be attributed to hypercalcemia. In this retrospective study, we evaluated the various clinical aspects of hypercalcemia in relation to cancer stage, histologic cell type, mass size, bone metastasis, performance status, and other possible characteristics. Results: Of the 29 lung cancer patients with hypercalcemia studied, most had squamous cell carcinoma on histologic examination. The incidence of hypercalcemia was significantly higher between 50 and 69 years of age and in the advanced cancer stages. Although serum calcium level was positively correlated with mass size, performance status, and bone metastasis, the correlations were not statistically significant. Altered consciousness was significantly more frequent in patients with higher serum calcium levels. There were no differences in effectiveness among therapeutic regimens. Hypercalcemia occurred more frequently in the later stage of disease than at the initial diagnosis of lung cancer. Most of the patients died within 1 month after the development of hypercalcemia. Conclusion: We conclude that hypercalcemia in lung cancer is related to an extremely poor prognosis, may be one of the causes of death, and should be treated aggressively to prevent sudden deterioration or death.
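The inclusion criterion above uses a corrected calcium level. The abstract does not state which correction was applied; a common choice is the albumin-based Payne formula, sketched here under that assumption:

```python
def corrected_calcium(total_ca_mg_dl, albumin_g_dl):
    """Albumin-corrected calcium in mg/dL (Payne formula; an assumption,
    not stated in the paper): add 0.8 mg/dL per 1 g/dL albumin below 4.0."""
    return total_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)

def meets_hypercalcemia_criterion(corrected_mg_dl):
    """Study inclusion threshold: corrected calcium >= 10.5 mg/dL."""
    return corrected_mg_dl >= 10.5
```

For instance, a measured calcium of 9.5 mg/dL with an albumin of 2.0 g/dL corrects to 11.1 mg/dL, meeting the study's threshold despite a normal-range measured value, which is why cachectic, hypoalbuminemic cancer patients are screened with corrected rather than raw values.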


Development of a Simultaneous Analytical Method for Determination of Insecticide Broflanilide and Its Metabolite Residues in Agricultural Products Using LC-MS/MS (LC-MS/MS를 이용한 농산물 중 살충제 Broflanilide 및 대사물질 동시시험법 개발)

  • Park, Ji-Su;Do, Jung-Ah;Lee, Han Sol;Park, Shin-min;Cho, Sung Min;Kim, Ji-Young;Shin, Hye-Sun;Jang, Dong Eun;Jung, Yong-hyun;Lee, Kangbong
    • Journal of Food Hygiene and Safety
    • /
    • v.34 no.2
    • /
    • pp.124-134
    • /
    • 2019
  • An analytical method was developed for the determination of broflanilide and its metabolites in agricultural products. Sample preparation was conducted using the QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, followed by LC-MS/MS (liquid chromatography-tandem mass spectrometry). The analytes were extracted with acetonitrile and cleaned up using d-SPE (dispersive solid phase extraction) sorbents such as anhydrous magnesium sulfate, primary secondary amine (PSA), and octadecyl (C18). The limit of detection (LOD) and limit of quantification (LOQ) were 0.004 and 0.01 mg/kg, respectively. The recoveries for broflanilide, DM-8007, and S(PFP-OH)-8007 ranged from 90.7 to 113.7%, 88.2 to 109.7%, and 79.8 to 97.8%, respectively, at three concentration levels (LOQ, 10×LOQ, 50×LOQ), with relative standard deviations (RSD) less than 8.8%. In the inter-laboratory study, the recoveries for broflanilide, DM-8007, and S(PFP-OH)-8007 ranged from 86.3 to 109.1%, 87.8 to 109.7%, and 78.8 to 102.1%, respectively, with RSD values below 21%. All values met the criteria of the Codex guidelines (CAC/GL 40-1993, 2003) and the Food and Drug Safety Evaluation guidelines (2016). Therefore, the proposed analytical method is accurate, effective, and sensitive for broflanilide determination in agricultural commodities.
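The method-validation figures above (recovery and RSD at each fortification level) follow directly from their definitions; a minimal sketch with hypothetical measurement values:

```python
import statistics

def recovery_percent(measured_mg_kg, spiked_mg_kg):
    """Recovery (%) of a fortified sample: measured over spiked concentration."""
    return 100.0 * measured_mg_kg / spiked_mg_kg

def rsd_percent(replicates):
    """Relative standard deviation (%): sample standard deviation over the mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)
```

For example, a sample spiked at the LOQ (0.01 mg/kg) and measured at 0.00907 mg/kg gives the study's lowest broflanilide recovery of 90.7%; the RSD across replicate recoveries is what the Codex acceptability ranges are checked against.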

A Study on the Current Status of Prescribed Drugs in Oriental Health Insurance and their Improvement (한방건강보험 약제 투약 실태 및 활성화 방안 연구)

  • Kwon, Yong-Chan;Yoo, Wang-Keun;Seo, Bu-Il
    • The Korea Journal of Herbology
    • /
    • v.27 no.2
    • /
    • pp.1-16
    • /
    • 2012
  • Objective : This survey was performed to investigate the current status of prescription drugs in Oriental medical institutes and to draw up a plan for revitalizing Oriental medical health insurance. Method : The survey covered 321 doctors working at Oriental medical institutes in the Daegu and Kyungbuk areas over a period of three months, from June 1, 2010 to September 1, 2010. Result : 1. Regarding the current use of herbal drugs under Oriental health insurance, most of the doctors surveyed prescribe insurance drugs, but to fewer than 20% of the patients visiting their clinics. 2. Regarding awareness of herbal health care drugs: as for the difference between insurance drugs (powder type) and granular type drugs, doctors see them as differing in only one respect, namely whether they are covered by health insurance. On insurance coverage for granular type drugs, doctors hope that granular type drugs will be accepted as insurance drugs but worry that the number of outpatients might dwindle because of increased insurance co-payments. They also point out that the biggest obstacles to expanding granular type drugs as insurance drugs are the government's lack of understanding and the objection of the Health Insurance Review and Assessment Service (HIRA), which fears increased insurance claims. 3. On Oriental medicine doctors' views of the herbal pharmaceutical industry, responses were not all positive ('neglect of new product development and R&D infrastructure' and 'smallness of the industry'). Asked what needs the greatest improvement in the herbal pharmaceutical industry, 'securing sufficient capital, good manufacturing, and strengthening quality control' ranked highest. 4. Asked what is most needed to improve herbal health insurance medicine, the most frequent responses were 'increasing the accessibility and utilization of Oriental medical clinics by diversifying the means of prescription', 'improving insurance benefits (cap adjustments)', 'increasing the proportion of high-quality medicinal plants', 'setting the ceiling of co-payments (deductibles) at 20,000 won or more', 'expanding the choice of formulations', 'expanding formulations to tablets and pills', and 'admitting and expanding granular type drugs as insurance drugs'. 5. Analyses of the general characteristics of the use of Oriental health care herbal drugs showed the following. First, the frequency of use of health insurance drugs differed significantly by total monthly sales amount (p<0.001) and by average number of daily patients (p<0.05). Second, willingness to expand the use of insurance drugs differed significantly by total monthly sales amount (p<0.05). 6. Analyses of the perception of herbal health care drugs showed the following. First, views on insurance claims due to increased co-payments (deductible amounts) differed significantly by working period (p<0.01) and by total monthly sales amount (p<0.01). Second, views on the obstacles that hinder granular type drugs from being accepted as health insurance drugs differed significantly by working period (p<0.05). 7. Analyses of the understanding of Oriental herbal pharmaceutical companies showed the following. First, opinions on pharmaceutical companies, examined by analysis of variance, differed significantly by total monthly sales amount (p<0.05). Second, opinions on the problems of herbal pharmaceutical companies differed significantly by working period (p<0.01) and by total monthly sales amount (p<0.001). Lastly, opinions on reforms of pharmaceutical companies differed significantly by working period (p<0.001). 8. Analyses of views on the improvement of insurance herbal drugs showed the following. First, views on insurance benefits differed significantly by working period (p<0.05), total monthly sales amount (p<0.05), and average number of daily patients (p<0.01). Second, opinions on the needs for improving herbal insurance drugs were examined in five aspects: approval of granular type drugs as insurance drugs, expansion of the number of prescribed insurance drugs, the need for a variety of formulations, the need for a task force including Oriental medical doctors to revise the existing system, and adjustment of the current ceilings on the fixed amount and fixed rate. Analysis of variance showed significant differences by average number of daily patients (p<0.01). Conclusion : From the results of this study, the first measures to take are to reform the overall insurance benefit system, including the co-payment system (fixed-rate cap adjustment), to expand the number of herbal drugs that can be prescribed with matching insurance benefits, and to revitalize the herbal medicine insurance system by diversifying formulations. In addition, it is recommended to improve the effectiveness of herbal medicine by making plans to enhance its efficacy and by enabling small pharmaceutical companies to grow.

A Study on the Buyer's Decision Making Models for Introducing Intelligent Online Handmade Services (지능형 온라인 핸드메이드 서비스 도입을 위한 구매자 의사결정모형에 관한 연구)

  • Park, Jong-Won;Yang, Sung-Byung
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.119-138
    • /
    • 2016
  • Since the Industrial Revolution, which made the mass production and mass distribution of standardized goods possible, machine-made (manufactured) products have accounted for the majority of the market. In recent years, however, purchasing even more expensive handmade products has become a noticeable trend, as consumers have come to acknowledge the value of handmade products: the craftsman's commitment, belief in their quality and scarcity, and the sense of self-esteem from owning them. Consumer interest in handmade products has grown explosively, coupled with the recent development of three-dimensional (3D) printing technologies. Etsy.com is the world's largest online handmade platform. Like any other online platform, it provides an online market where buyers and sellers virtually meet to share information and transact business; what makes Etsy.com different is that shops within the platform deal only in handmade products, in categories ranging from jewelry to toys. Since its establishment in 2005, despite being limited to handmade products, Etsy.com has enjoyed rapid growth in membership, transaction volume, and revenue, and in April 2015 it raised funds through an initial public offering (IPO) of more than 1.8 billion USD, which demonstrates the huge potential of online handmade platforms. After the success of Etsy.com, various online handmade platforms such as Handmade at Amazon, ArtFire, DaWanda, and Craft is ART have emerged and now compete with each other, increasing the size of the market. According to Deloitte's 2015 holiday survey on the types of gifts respondents planned to buy during the holiday season, about 16% of U.S. consumers chose "homemade or craft items (e.g., Etsy purchase)," the same rate as for the computer game and shoes categories.
This indicates that consumer interests in online handmade platforms will continue to rise in the future. However, this high interest in the market for handmade products and their platforms has not yet led to academic research. Most extant studies have only focused on machine-made products and intelligent services for them. This indicates a lack of studies on handmade products and their intelligent services on virtual platforms. Therefore, this study used signaling theory and prior research on the effects of sellers' characteristics on their performance (e.g., total sales and price premiums) in the buyer-seller relationship to identify the key influencing e-Image factors (e.g., reputation, size, information sharing, and length of relationship). Then, their impacts on the performance of shops within the online handmade platform were empirically examined; the dataset was collected from Etsy.com through the application of web harvesting technology. The results from the structural equation modeling revealed that the reputation, size, and information sharing have significant effects on the total sales, while the reputation and length of relationship influence price premiums. This study extended the online platform research into online handmade platform research by identifying key influencing e-Image factors on within-platform shop's total sales and price premiums based on signaling theory and then performed a statistical investigation. These findings are expected to be a stepping stone for future studies on intelligent online handmade services as well as handmade products themselves. Furthermore, the findings of the study provide online handmade platform operators with practical guidelines on how to implement intelligent online handmade services. They should also help shop managers build their marketing strategies in a more specific and effective manner by suggesting key influencing e-Image factors. 
The results of this study should contribute to the vitalization of intelligent online handmade services by providing clues on how to maximize within-platform shops' total sales and price premiums.

A study on the Regulatory Environment of the French Distribution Industry and the Intermarche's Management strategies

  • Choi, In-Sik;Lee, Sang-Youn
    • The Journal of Industrial Distribution & Business
    • /
    • v.3 no.1
    • /
    • pp.7-16
    • /
    • 2012
  • Despite the enforcement of SSM control laws such as 'the Law of Developing the Distribution Industry (LDDI)' and 'the Law of Promoting Mutual Cooperation between Large and Small/medium Enterprises (LPMC)', which stipulate the business adjustment system, the number of super-supermarkets (SSMs) has kept expanding in Korea. In France, however, super centers are regulated more strongly and directly than anywhere else in Europe; there is not a single SSM in Paris, which is emphasized as the outcome of the French government's regulation of the opening of large-scale retail stores. In France, the authority to approve store openings is deeply centralized, and store opening regulation is a socio-economic regulation driven by economic laws, while the EU strongly regulates the distribution industry. To control the French distribution industry, seven laws and regulations, the Commission départementale d'urbanisme commercial guidelines (CDLIC) (1969), the Royer Law (1973), the Doubin Law (1990), the Sapin Law (1993), the Raffarin Law (1996), solidarité et renouvellement urbains (SRU) (2000), and the Loi de modernisation de l'économie (LME) (2009), have been promulgated one by one since the amendment of the Fontanet guidelines; through them, commercial adjustment laws and regulations have been complemented and reinforced while regulatory measures have been taken. Even as these strong regulatory laws were being formed, InterMarche, the largest supermarket chain in France, has existed as a global enterprise specialized in retail distribution with over 4,000 stores in Europe. InterMarche's business divides largely into two segments, food and non-food. In the food segment, the InterMarche supermarket chain has 2,300 stores in Europe, and Netto, a hard-discount store chain in France, has 420 stores. Restaumarch is a chain of traditional family restaurants, and the steak house chain Poivre Rouge currently has 4 restaurants. In addition, Ecomarche is a supermarket chain for small and medium cities. In the non-food segment, the DIY and gardening chain Bricomarche has a total of 620 stores in Europe, the car-related chain Roady has a total of 158 stores in Europe, and there is the clothing chain Veti as well. As for InterMarche's management strategies: since its distribution strategy is to sell goods at cheap prices, buying goods cheap alone is not enough; to sell goods cheap, it is essential to buy them cheap, manage them cheap, systemize them cheap, and transport them cheap. In quality assurance, InterMarche has guaranteed purchase safety for consumers by providing its own private brand products; with 90 private brands of its own, it is the retailer with the largest number of distributor brands in France. In its IT service strategy, InterMarche utilizes a high-performance IT system to obtain as much market information as possible and to find the best locations for opening stores. In its global expansion strategy of international alliances, InterMarche established the ALDIS group together with distribution enterprises of Spain and Germany to expand its food purchasing, and in the non-food segment it established the ARENA group in alliance with 11 international distribution enterprises. These strategies are intended to identify consumer needs for both price and quality of goods and to secure closely localized purchase and supply networks. It is also necessary to respond promptly to constantly changing circumstances by integrating with the relevant regions and providing diversified customer services. InterMarche's positive policy of promoting local partnerships and its assistance in strengthening the local economic structure hold implications for the retail distributors of our country.


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not reveal any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
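The %RMSE statistic used throughout these validation checks is commonly defined in travel-model validation as the root-mean-square error normalized by the average ground count; the abstract does not spell out its exact formula, so the sketch below is an assumption, with made-up volumes.

```python
import math

def pct_rmse(estimated, observed):
    """Percent root-mean-square error: RMSE of assigned volumes
    against ground counts, divided by the average ground count.
    (A common travel-model validation definition, assumed here.)"""
    n = len(observed)
    rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(estimated, observed)) / n)
    mean_obs = sum(observed) / n
    return 100.0 * rmse / mean_obs

# Three illustrative check points: assigned vs. counted truck volumes.
print(round(pct_rmse([950, 1200, 480], [1000, 1100, 500]), 1))  # → 7.6
```

Because the denominator is the average ground count, an absolute error of the same size produces a larger %RMSE on low-volume links, which is consistent with the inverse relationship between %RMSE and average link volume reported above.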
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.


Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet, the danger of excessive exposure of personal information is increasing because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of a need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors of context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have focused on only a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge of and experience with context-aware computing technology. To date, context-aware services have not been widely deployed on a commercial scale, so only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Consequently, conducting a survey on the assumption that the participants have sufficient experience or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from the experts and to produce a rank-order list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in the privacy and context-aware system fields was involved in our research. The Delphi rounds faithfully follow the procedure for the Delphi study proposed by Okoli and Pawlowski.
This involves three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study. We performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern. To do so, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factors to determine the final sub-factors from the candidates. The sub-factors were found from the literature survey. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important factors among the main factors and sub-factors, respectively.
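A concordance analysis of this kind is commonly implemented with Kendall's coefficient of concordance (W), which ranges from 0 (no agreement among experts) to 1 (identical rankings). The abstract does not name the exact statistic used, so the following sketch is an assumption, with illustrative expert rankings.

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m experts ranking
    n items (ties ignored for simplicity). rankings: one list of
    ranks 1..n per expert. W = 12*S / (m^2 * (n^3 - n)), where S is
    the sum of squared deviations of the rank totals from their mean."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(expert[i] for expert in rankings) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12.0 * s / (m * m * (n ** 3 - n))

# Three hypothetical experts ranking four privacy-concern factors;
# two agree exactly, one swaps the top two factors.
w = kendalls_w([[1, 2, 3, 4],
                [1, 2, 3, 4],
                [2, 1, 3, 4]])
print(round(w, 3))  # → 0.911: strong but not perfect agreement
```

Tracking W across successive Delphi rounds shows whether the panel is converging toward consensus, which is the stopping criterion such studies typically use.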
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. Our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to what extent users have privacy concerns. The traditional questionnaire method was not selected because, for context-aware personalized services, users were considered to lack understanding of and experience with the new technology. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and the sensory network as the most important factors among the technological characteristics of context-aware personalized services.
In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, including which technologies are needed and in what sequence, to acquire which types of users' context information. Most studies presuppose the availability of context information and focus on which services and systems should be provided and developed by utilizing it, along with the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. To build on the evaluation of the sub-factors, additional studies would be necessary on approaches to reducing users' privacy concerns regarding technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services built around the anywhere-anytime-any device concept, have come to be regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.