
보래봉 일대의 관속식물상 (A Flora of Vascular Plants in Boraebong)

  • 김지은;김영수;이정심;장주은;정현진;김알렉세이;한상국;길희영
    • 한국자원식물학회지
    • /
    • Vol. 37, No. 1
    • /
    • pp.35-61
    • /
    • 2024
  • Floristic studies not only capture the current status of species diversity; based on voucher specimens, they also allow change to be assessed and predicted and the climate and biodiversity of the Korean Peninsula to be documented. Boraebong harbors plant resources worth conserving, but anthropogenic disturbance through its hiking trails and forest roads is a concern. This study therefore reports the vascular plants of the Boraebong area based on voucher specimens and photographs, to serve as baseline data for conserving the mountain's biodiversity and, through comparison with previous studies, to help prevent the further introduction of invasive alien plants. Eleven field surveys conducted from April to November 2022 identified a total of 455 taxa, comprising 87 families, 269 genera, 401 species, 13 subspecies, 35 varieties, and 6 forms. These included 4 red-list taxa (2 EN, 2 NT), 18 taxa endemic to the Korean Peninsula, and 102 floristic target species (1 of grade V, 16 of grade IV, 31 of grade III, 31 of grade II, and 23 of grade I). In addition, 17 alien plant taxa, 2 ecosystem-disturbing taxa, and 439 resource-plant taxa were confirmed at Boraebong. Comparison with previous studies showed that 98 taxa (38 families, 76 genera, 86 species, 1 subspecies, 8 varieties, and 3 forms) were recorded in the area for the first time.

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • Vol. 20, No. 2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the risk of personal information exposure is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concern over the unrestricted availability of context information. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous work on information privacy factors for context-aware applications has at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing; existing studies have focused on only a small subset of them, so no mutually exclusive set of factors uniquely and completely describes information privacy in context-aware applications. Second, most studies have relied on user surveys to identify information privacy factors, despite the limits of users' knowledge of and experience with context-aware computing technology. To date, because context-aware services have not yet been deployed widely on a commercial scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge of the technology even with scenarios, pictures, flash animations, and the like. A survey that assumes the participants have sufficient experience with or understanding of the technologies shown in it may therefore not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider the overall technology characteristics in order to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-order list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems took part in the research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. In the brainstorming round only, experts were treated as individuals rather than as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented. The second round of the questionnaire took up the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey, and respondents were asked to evaluate each sub-factor's suitability against the corresponding main factor so as to determine the final sub-factors from among the candidates. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and respondents were asked to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence; to that end, a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with the higher potential to increase users' privacy concerns. Second, this study considered privacy issues of service delivery and display, which were largely overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns.
A traditional questionnaire method was not chosen because users of context-aware personalized services almost entirely lack understanding of and experience with the new technology. In understanding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensory network as the most important among the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies, presupposing the continued development of context-aware technology, focus on which services and systems should be provided and developed by utilizing context information; the results of this study show, however, that in terms of users' privacy it is necessary to pay greater attention to the activities that acquire context information. Judging from the evaluation of the sub-factors, additional studies will be needed on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which relates to output. The results show that the delivery and display that present services to users in context-aware personalized services, moving toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
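The concordance analysis mentioned above is commonly implemented as Kendall's coefficient of concordance W; the sketch below shows how such a consistency check over the experts' rank-order lists might be computed. The rank matrix is hypothetical, and the use of Kendall's W here is an assumption, since the abstract does not name the statistic.

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance W for an (experts x items) rank matrix.

    W ranges from 0 (no agreement) to 1 (perfect agreement among experts).
    """
    m, n = ranks.shape                    # m experts ranking n items
    rank_sums = ranks.sum(axis=0)         # column sums R_j
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical third-round data: 5 experts ranking 4 main factors (1 = most important).
ranks = np.array([
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 3, 4],
    [1, 2, 4, 3],
    [1, 2, 3, 4],
])
print(f"Kendall's W = {kendalls_w(ranks):.3f}")
```

A round-over-round rise in W is the usual signal that consensus has been reached and further Delphi iterations can stop.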

한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발 (DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA)

  • 박만배
    • 대한교통학회:학술대회논문집
    • /
    • 27th Annual Conference of the Korean Society of Transportation, 1995
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results, because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration is the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available; for this research, however, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The form of gravity model being calibrated is sketched below.
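For reference, the production-constrained form of the gravity model being calibrated can be written as follows; the notation is standard rather than taken from the paper.

```latex
% Gravity model trip distribution: trips T_{ij} from zone i to zone j.
% P_i = productions of zone i, A_j = attractions of zone j,
% F(c_{ij}) = friction factor for travel cost c_{ij} (one curve per trip type).
T_{ij} \;=\; P_i \,\frac{A_j \, F(c_{ij})}{\sum_{k} A_k \, F(c_{ik})}
```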
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link" (sketched in code after this paragraph). Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable, because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions; more importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC (link volume to ground count) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
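The SELINK adjustment described above reduces to scaling every O-D flow assigned to a selected link by the ratio of the ground count to the assigned volume. Below is a minimal sketch under assumed data structures; all names are hypothetical, not WisDOT's.

```python
def selink_adjust(trip_table, link_trips, ground_count, assigned_volume):
    """Scale the O-D flows that traverse one selected link.

    trip_table: dict mapping (origin, destination) -> truck trips
    link_trips: set of (origin, destination) pairs assigned to the selected link
    """
    k = ground_count / assigned_volume      # link adjustment factor
    for od in link_trips:
        trip_table[od] *= k                 # adjusts the trip ends of every zone pair using the link
    return trip_table

def pct_rmse(assigned, counted):
    """%RMSE between assigned link volumes and ground counts."""
    n = len(counted)
    rmse = (sum((a - c) ** 2 for a, c in zip(assigned, counted)) / n) ** 0.5
    return 100.0 * rmse / (sum(counted) / n)
```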
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of those factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.


이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가 (Feasibility of Deep Learning Algorithms for Binary Classification Problems)

  • 김기태;이보미;김종우
    • 지능정보연구
    • /
    • Vol. 23, No. 1
    • /
    • pp.95-108
    • /
    • 2017
  • Interest in deep learning has surged since the advent of AlphaGo. Deep learning is expected to become a core future technology that improves many aspects of daily life, but its major successes have so far been limited to areas such as image recognition and natural language processing, and its application to traditional business analytics problems remains scarce. In practice, deep learning involves many network design issues: the choice among algorithms such as the Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Deep Boltzmann Machine (DBM); whether to use dropout; and the selection of activation functions. Applying deep learning to business problems therefore remains an area to be explored, and the problems that may arise when deep learning is applied in practice are largely unknown. Accordingly, this study experimentally examined whether deep learning can be applied to binary classification, a major class of problems that includes direct-marketing response models, customer churn analysis, and loan risk analysis. Using a dataset on telemarketing responses from a Portuguese bank, we compared the performance on binary classification of a traditional Multi-Layer Perceptron (MLP), the deep learning algorithm CNN, Long Short-Term Memory (a variant of RNN), and dropout, a technique widely used in deep learning models. The experiments showed that the CNN outperformed the MLP even on this binary classification problem with business data. Moreover, for both MLP and CNN, models with dropout classified better than those without, confirming that a dropout-equipped CNN can also be applied to binary classification problems.
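As a rough illustration of the kind of models being compared, a dropout-equipped MLP and a 1-D CNN for tabular binary classification might be sketched in Keras as follows; all layer sizes and hyperparameters are assumptions, not the paper's configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_features = 20  # hypothetical number of encoded input attributes

# MLP with dropout for a binary response
mlp = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                    # randomly silence half the units during training
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # P(response = yes)
])

# 1-D CNN treating the feature vector as a length-n_features "sequence"
cnn = keras.Sequential([
    keras.Input(shape=(n_features, 1)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Dropout(0.5),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

for model in (mlp, cnn):
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```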

서비스제공자와 사용자의 인식차이 분석을 통한 소셜커머스 핵심성공요인에 대한 연구: 한국의 티켓몬스터 중심으로 (A Study on the Critical Success Factors of Social Commerce through the Analysis of the Perception Gap between the Service Providers and the Users: Focused on Ticket Monster in Korea)

  • 김일중;이대철;임규건
    • Asia pacific journal of information systems
    • /
    • Vol. 24, No. 2
    • /
    • pp.211-232
    • /
    • 2014
  • Recently, interest in social commerce using SNS (Social Networking Service) has been growing, and the size of its market is expanding due to the popularization of smartphones, tablet PCs, and other smart devices. Various studies have accordingly been attempted, but most previous studies have been conducted from the perspective of the users. The purpose of this study is to derive user-centered CSFs (Critical Success Factors) of social commerce from the previous studies and to analyze the CSF perception gap between social commerce service providers and users. The CSF perception gap between the two groups shows a difference between the ideal image the service providers hope for and the actual image the users have of social commerce companies. This study provides effective improvement directions for social commerce companies by presenting current business problems and solution plans. To this end, this study selected as the target service provider Korea's representative social commerce business, Ticket Monster, which is dominant in sales and staff size and gained strong funding power through an M&A stock exchange in August 2011 with the US social commerce business LivingSocial, which has Amazon.com as a shareholder. We gathered questionnaires from both service providers and users from October 22 to October 31, 2012 to conduct an empirical analysis. We surveyed 160 service providers at Ticket Monster and 160 social commerce users who had experience using the Ticket Monster service. Out of 320 surveys, 20 questionnaires that were unfit or undependable were discarded; consequently, the remaining 300 (150 service providers, 150 users) were used for this empirical study. The statistics were analyzed using SPSS 12.0. The implications of the empirical analysis are as follows. First of all, the two groups rank the importance of the social commerce CSFs differently. While service providers regard Price Economic as the most important CSF influencing purchasing intention, the users regard Trust as the most important. This means that the service providers have to leverage the unique strength of social commerce, earning customers' trust, rather than just focusing on selling products at a discounted price. Service providers need to enhance effective communication through SNS and play a vital role as a trusted adviser who provides curation services and explains the value of products through information filtering. They also need to pay attention to preventing consumer damage from deceptive and false advertising, and should create a detailed reward system for consumer damage caused by such problems; this can build strong ties with customers. Second, both service providers and users tend to consider Price Economic, Utility, Trust, and Word of Mouth Effect to be the social commerce CSFs influencing purchasing intention. Accordingly, users expect benefits in terms of price and economy when using social commerce, and service providers should be able to offer individualized discount benefits through diverse methods using social network services. From the aspect of usefulness, service providers are required to make users aware of the time-saving, efficiency, and convenience of using social commerce.
Therefore, it is necessary to increase the usefulness of social commerce through the introduction of new management strategies, such as strengthening the website's search engine, facilitating payment through the shopping basket, and package distribution. Trust, as mentioned before, is the most important variable in consumers' minds, so it must be managed for sustainable operation. If trust in social commerce falls due to consumer damage caused by false and exaggerated advertising, it could negatively influence the image of the social commerce industry in general. Instead of advertising with famous celebrities and spending lavishly on marketing, the social commerce industry should harness the word-of-mouth effect among users by making use of social network services, the major marketing method of early social commerce. The word-of-mouth effect arising from consumers spontaneously acting as self-marketers can not only reduce a service provider's advertising costs but also prepare the basis for offering discounted prices to consumers; in this context, the word-of-mouth effect should be managed as a CSF of social commerce. Third, trade safety was not derived as one of the CSFs. Recently, with e-commerce such as social commerce and Internet shopping growing in a variety of forms, the importance of trade safety on the Internet has also increased, but in this study trade safety was not evaluated as a CSF of social commerce by either group. This study judges that this is because both the service provider group and the user group perceive that a reliable PG (Payment Gateway) handles the e-payment of Internet transactions. Accordingly, both groups feel that a social commerce company can establish a corporate identity through its website and through differentiation in the products and services it sells, but see little difference between businesses in the e-payment system; in other words, trade safety is perceived as a natural, basic universal service. Fourth, service providers should intensify communication with users by making use of social network services, the major marketing method of social commerce, and should harness the word-of-mouth effect between users. The word-of-mouth effect arising from consumers spontaneously acting as self-marketers can not only reduce a service provider's advertising costs but also prepare the basis for offering discounted prices to consumers; in this context, it is judged that the word-of-mouth effect should be managed as a CSF of social commerce. In this paper, the characteristics of social commerce are limited to five independent variables; if an additional study proceeds with more varied independent variables, more in-depth results will be derived. In addition, this research targets social commerce service providers and users; considering that social commerce is a two-sided market, deriving CSFs through an analysis of the perception gap between social commerce service providers and their advertising clients would be worth addressing in a follow-up study.
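The perception-gap comparison between the two groups of 150 respondents is the kind of analysis typically run as an independent-samples t-test per factor. The following is a hedged sketch with hypothetical Likert scores; the paper reports only that SPSS 12.0 was used, not this exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 5-point Likert scores for the 'Trust' factor (150 providers, 150 users)
providers = rng.integers(2, 6, size=150).astype(float)
users = rng.integers(3, 6, size=150).astype(float)

t, p = stats.ttest_ind(providers, users, equal_var=False)  # Welch's t-test
print(f"Trust: provider mean {providers.mean():.2f}, user mean {users.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
```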

실험적으로 유발시킨 VX2 동물모델에서의 Mn-phthalocyanine과 Mangafodipir trisodium의 비교영상 (The Comparative Imaging Study on Mn-phthalocyanine and Mangafodipir trisodium in Experimental VX2 Animal Model)

  • 박현정;고성민;김용선;장용민
    • Investigative Magnetic Resonance Imaging
    • /
    • Vol. 8, No. 1
    • /
    • pp.32-41
    • /
    • 2004
  • Purpose: To examine the magnetic relaxation properties of MnPC, to observe its hepatic enhancement pattern on MR imaging using VX2 carcinoma implanted in rabbit liver, and to explore the potential of MnPC as a tissue-specific contrast agent in comparison with a hepatocyte-specific agent. Materials and Methods: Phthalocyanine (PC) was chosen as the ligand for the paramagnetic element. Reacting 2.01 g (5.2 mmol) of phthalocyanine with 0.37 g (1.4 mmol) of manganese chloride at 310 °C for 36 hours and purifying the mixture by chromatography (CHCl3:CH3OH = 98:2, volume ratio) yielded 1.04 g (46%) of MnPC with a molecular weight of 2,000 daltons. Relaxivities were measured at 1.5 T (64 MHz) with MnPC diluted to 0.1 mmol. VX2 carcinoma was induced experimentally by injecting a tumor-cell suspension into the liver parenchyma of rabbits. All images were acquired on a 1.5 T MR system with a knee coil. The newly developed macromolecular MR contrast agent MnPC (4 mmol/kg) and the hepatocyte-specific agent Mn-DPDP (0.01 mmol/kg) were injected via the rabbits' ear veins. T1-weighted images were obtained with spin echo (TR/TE = 516/14 msec) and a fast multiplanar spoiled gradient-recalled sequence (TR/TE = 80/4 msec, flip angle 60°), and T2-weighted images with fast spin echo (TR/TE = 1200/85 msec). Results: The relaxivities of MnPC at 1.5 T (64 MHz) were R1 = 7.28 mM⁻¹s⁻¹ and R2 = 55.56 mM⁻¹s⁻¹. The high T2 relaxivity of MnPC reduced the signal intensity of normal liver parenchyma on T2-weighted images, making it easy to distinguish parenchyma from VX2 carcinoma. After MnPC injection, tumor margins on T1-weighted images were clearer than after injection of the hepatocyte-specific agent, and enhancement remained high for at least an hour after injection. Conclusion: The uptake of MnPC by hepatocytes and its excretion into the bile ducts resemble the behavior of Mn-DPDP, confirming MnPC as a new liver-specific contrast agent. The much larger R2 of MnPC compared with existing agents also suggests its potential use as a T2 as well as a T1 agent. Further in vivo and in vitro studies, and additional studies in other animal models, are required before clinical use.
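For context, relaxivity values like those reported relate the measured relaxation rates linearly to agent concentration. The relation below is the standard relaxivity model rather than an equation from the paper, with the measured MnPC values substituted:

```latex
% Observed relaxation rate vs. contrast-agent concentration C (i = 1, 2)
\frac{1}{T_i^{\mathrm{obs}}} \;=\; \frac{1}{T_i^{0}} \;+\; r_i\, C,
\qquad r_1 = 7.28~\mathrm{mM^{-1}s^{-1}},\quad r_2 = 55.56~\mathrm{mM^{-1}s^{-1}}
\ \text{(MnPC at 1.5 T)}
```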


STZ-당뇨쥐에서 운동부하가 골격근 및 간의 항산화효소 활성도에 미치는 영향 (Effect of Exercise on Antioxidant Enzyme Activities of Skeletal Muscle and Liver in STZ-diabetic Rats)

  • 석광호;이석강
    • Journal of Yeungnam Medical Science
    • /
    • Vol. 17, No. 1
    • /
    • pp.21-30
    • /
    • 2000
  • The effects of exercise loading on antioxidant enzyme activities of skeletal muscle and liver, and on free-radical tissue damage, in diabetic rats are summarized as follows. Blood glucose (mg/dL) in the streptozotocin-induced diabetic group was 344 ± 14.8, higher than the control value of 117 ± 2.7 (p<0.001), and decreased significantly with exercise (p<0.01). Plasma insulin (µU/mL) was 8.5 ± 0.5 in the diabetic group, significantly lower than the control value of 20.6 ± 1.4 (p<0.001), and did not differ after exercise compared with before. Glycogen concentrations (mg/100 g wet wt.) of skeletal muscle and liver, measured after exercise to gauge the actual exercise load in the diabetic group, were 1.0 ± 0.1 and 7.7 ± 0.8 respectively, both significantly decreased from pre-exercise values (p<0.001, p<0.01). The antioxidant enzymes of skeletal muscle and liver, namely superoxide dismutase (SOD), glutathione peroxidase (GPX), and catalase (CAT), responded differently to exercise in the diabetic group. Skeletal-muscle SOD activity (unit/mg protein) was 6.3 ± 0.2 in controls and 5.8 ± 0.2 in the diabetic group, with no significant difference, but fell to 5.0 ± 0.1 after exercise, significantly lower than both controls and the pre-exercise diabetic group (p<0.001, p<0.01). GPX activity (nmol/min/mg protein) in the diabetic group was 2.3 ± 0.2 before and 1.8 ± 0.1 after exercise, both higher than the control value of 1.6 ± 0.0 (p<0.05, p<0.05), but was unaffected by exercise. CAT activity (pmol/min/mg protein) was 7.6 ± 0.7 in the diabetic group, not different from the control value of 6.3 ± 0.7, but fell to 4.6 ± 0.3 after exercise, lower than controls (p<0.05) and lower than the pre-exercise diabetic value (p<0.01). Skeletal-muscle MDA concentration in the diabetic group did not differ from controls and was unaffected by exercise. Liver SOD activity was 11.3 ± 0.2 in controls and significantly lower, at 9.6 ± 0.3, in the pre-exercise diabetic group (p<0.01). SOD activities measured before and after exercise in the diabetic group were both lower than in controls (p<0.01, p<0.001), with no effect of exercise. Liver GPX and CAT activities in the diabetic group did not differ significantly from controls and did not change with exercise. Liver MDA concentration (nmol/g wet wt.) in the pre-exercise diabetic group was 38.5 ± 1.3, significantly higher than the control value of 24.8 ± 0.9 (p<0.001); after exercise it remained higher than controls (p<0.001) but did not differ from the pre-exercise value. Taken together, in diabetic rats skeletal muscle adapted to exercise-induced oxidative stress without damage, whereas liver tissue was at risk of damage from free radicals generated by diabetes itself but suffered no further damage from exercise.


관상동맥-폐동맥 이상기시증(Anomalous Origin of Coronary Artery from Pulmonary Artery)의 수술적 치료: 중기 성적과 좌심실 및 승모판 기능의 변화 양상에 대한 연구 (Surgical Treatment of Anomalous Origin of Coronary Artery from the Pulmonary Artery: Postoperative Changes of Ventricular Dimensions and Mitral Regurgitation)

  • 강창현;김웅한;서홍주;김재현;이철;장윤희;황성욱;백만종;오삼세;나찬영;한재진;이영탁;김종환
    • Journal of Chest Surgery
    • /
    • Vol. 37, No. 1
    • /
    • pp.19-26
    • /
    • 2004
  • Background: The aims of this study were to assess the mid-term results of surgical treatment of anomalous origin of the coronary artery from the pulmonary artery and to analyze postoperative changes in left ventricular function and mitral regurgitation. Material and Method: Fifteen patients who underwent surgical treatment for this anomaly between 1985 and 2003 were reviewed. Before 1998 (9 patients), various surgical methods were used; from 1998 (6 patients), we 1) perfused and delivered cardioplegia through both the aorta and the main pulmonary artery, 2) reimplanted the coronary artery into the aorta using a conduit fashioned from part of the main pulmonary artery, and 3) performed no specific surgical correction of mitral regurgitation. Result: Median age was 6 months (1 month to 34 years). Operations comprised left subclavian artery to left anterior descending artery anastomosis in 1 case, ligation of the left coronary artery in 2, the Takeuchi procedure in 2, and coronary reimplantation in 10. Mean follow-up was 5.5 ± 5.8 years (2 months to 14 years); there was 1 early death and 1 late death, giving a 5-year survival of 85.6 ± 9.6%. Preoperative left ventricular end-diastolic and end-systolic dimensions decreased significantly within 3 months of surgery (p<0.05), and significant (grade 3 or higher) preoperative mitral regurgitation, observed in 6 cases (40.0%), fell to grade 2 or below within 1 month of surgery in all cases. Three patients required reoperation, for stenosis at the coronary anastomosis and for mitral regurgitation; among patients treated after 1998, however, there were no early deaths, late deaths, or reoperations. Conclusion: Surgical treatment of this anomaly achieved satisfactory survival, reduction of left ventricular dimensions, and improvement of mitral regurgitation, although in the long term coronary problems can affect recurrence of mitral regurgitation and long-term survival. The unified surgical strategy adopted since 1998 has yielded satisfactory results.

트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석 (Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints)

  • 윤은일;편광범
    • 인터넷정보학회논문지
    • /
    • Vol. 16, No. 1
    • /
    • pp.67-74
    • /
    • 2015
  • Frequent itemset mining that considers the value of items has recently been studied actively as one of the most important issues in data mining. Value-aware mining techniques fall broadly into weighted frequent itemset mining, frequent itemset mining based on transaction weights, and utility itemset mining, depending on how values are applied. This paper presents an empirical analysis of frequent itemset mining based on transaction weights. In general, these techniques compute a transaction weight from the values of the items in the database and then mine weighted frequent itemsets based on the computed weight of each transaction. Because a transaction's weight grows as it contains more high-value items, analyzing transaction weights reveals the value of each transaction. We analyze the strengths and weaknesses of the best-known transaction-weight-based algorithms, WIS, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, and compare their performance. WIS, the algorithm in which the concept and technique of transaction-weight-based frequent itemset mining were first proposed, is based on Apriori, the traditional frequent itemset mining method. The other methods, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, use a special lattice-shaped storage structure, the WIT-tree, to mine weighted frequent itemsets more efficiently. Each WIT-tree node stores an itemset together with the IDs of the transactions containing it, which reduces the many database scans that otherwise arise during itemset mining. In particular, whereas traditional algorithms perform numerous database scans, these algorithms read the database only once using the WIT-tree, avoiding the overhead of the mining process. All of them generate a new itemset of length N+1 from two itemsets of length N. WIT-FWIs exploits the information on transactions in which both itemsets occur; WIT-FWIs-MODIFY reduces the computation needed for frequency counting by using information about the itemsets being combined; and WIT-FWIs-DIFF uses information about transactions in which only one of the two itemsets occurs. To compare the algorithms under various experimental conditions, we evaluate mining time and maximum memory usage on dense data, in which transactions resemble one another, and on sparse data, in which transactions differ, and we run scalability tests to assess the stability of each algorithm. In dense data, WIT-FWIs and WIT-FWIs-MODIFY outperform the other algorithms; in sparse data, WIT-FWI-DIFF is the most efficient. WIS shows the lowest average performance because it is based on an algorithm that performs more computation.
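The transaction-weighted support shared by these algorithms can be sketched as follows: a transaction's weight is the mean weight of its items, and an itemset's weighted support is the summed weight of the transactions containing it divided by the total weight. This is a simplified reading of the WIS/WIT-FWI definitions, and all names and data are illustrative.

```python
def transaction_weight(transaction, item_weights):
    """Weight of a transaction: mean value of the items it contains."""
    return sum(item_weights[i] for i in transaction) / len(transaction)

def weighted_support(itemset, db, item_weights):
    """Summed weight of transactions containing the itemset over the total weight."""
    tw = [transaction_weight(t, item_weights) for t in db]
    covered = sum(w for t, w in zip(db, tw) if itemset <= t)  # set inclusion
    return covered / sum(tw)

# Hypothetical database and item values
db = [{"a", "b", "c"}, {"a", "c"}, {"b", "d"}, {"a", "b", "c", "d"}]
item_weights = {"a": 0.6, "b": 0.4, "c": 0.9, "d": 0.3}
print(weighted_support({"a", "c"}, db, item_weights))  # itemset {a, c}
```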

투과선량을 이용한 온라인 선량측정에서 불균질조직에 대한 선량 보정 (Inhomogeneity correction in on-line dosimetry using transmission dose)

  • 우홍균;허순녕;이형구;하성환
    • Journal of Radiation Protection and Research
    • /
    • Vol. 23, No. 3
    • /
    • pp.139-147
    • /
    • 1998
  • Purpose: In on-line dosimetry of a new concept, in which an algorithm computes the tumor dose from the transmission dose that has passed through the patient, inhomogeneous tissue such as lung affects both the tumor dose and the transmission dose. Experiments were performed to verify the accuracy of density-based correction when converting measured transmission dose to tumor dose in the presence of inhomogeneity. Methods: Cork phantoms (CP; density 0.202 g/cm³), similar in density to lung, and polystyrene phantoms (PP; density 1.040 g/cm³), similar in density to soft tissue, were measured under conditions resembling the human thorax. For anterior-posterior irradiation of the chest, 3 cm of PP was placed above and below CP of 5, 10, or 20 cm thickness. For lateral irradiation, 6 cm of PP representing the mediastinum was placed centrally, 10 cm of CP on each side, and a further 3 cm of PP outside. 4, 6, and 10 MV X-rays were used, with field sizes of 3×3 to 20×20 cm and phantom-chamber distances (PCD) of 10-50 cm. PP of the thickness expected, from the density ratio of the two materials, to attenuate the beam equivalently to CP was then substituted for CP and measured in the same way for comparison. Results: With density-corrected PP of thickness equivalent to CP, the transmission dose differed from that with CP by on average 0.18 (±0.27)%, 0.10 (±0.43)%, and 0.33 (±0.30)% at 4, 6, and 10 MV respectively for 5 cm of CP; by 0.23 (±0.73)%, 0.05 (±0.57)%, and 0.04 (±0.40)% for 10 cm of CP; and by 0.55 (±0.36)%, 0.34 (±0.27)%, and 0.34 (±0.18)% for 20 cm of CP. With 6 cm of PP in the middle, the errors by energy were 1.15 (±1.86)%, 0.90 (±1.43)%, and 0.86 (±1.01)%; the error was relatively large at PCD 10 cm, and excluding that case it fell markedly to 0.47 (±1.17)%, 0.42 (±0.96)%, and 0.55 (±0.77)%. Conclusion: Even when inhomogeneous lung tissue lies in the beam path, the tumor dose can be computed from the transmission dose by correcting for the inhomogeneity using tissue density.
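The density correction tested here amounts to replacing a cork slab with polystyrene of equal areal density. As a worked example with the stated densities, 10 cm of cork corresponds to roughly 1.94 cm of polystyrene:

```latex
% Density-scaled equivalent thickness: equal areal density (g/cm^2)
d_{\mathrm{PP}} \;=\; d_{\mathrm{CP}} \cdot \frac{\rho_{\mathrm{CP}}}{\rho_{\mathrm{PP}}}
\;=\; 10~\mathrm{cm} \times \frac{0.202}{1.040} \;\approx\; 1.94~\mathrm{cm}
```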
