• Title/Abstract/Keywords: Issue-network


빅카인즈를 활용한 GenAI(생성형 인공지능) 기술 동향 분석: ChatGPT 등장과 스타트업 영향 평가 (GenAI(Generative Artificial Intelligence) Technology Trend Analysis Using Bigkinds: ChatGPT Emergence and Startup Impact Assessment)

  • 이현주;성창수;전병훈
    • 벤처창업연구
    • /
    • Vol. 18, No. 4
    • /
    • pp.65-76
    • /
    • 2023
  • In the technology startup field, advances in artificial intelligence (AI) have emerged as a central theme of business model innovation, and venture firms are making wide-ranging efforts around AI to secure competitiveness. This study analyzed domestic news articles to examine the relationship between the development of GenAI technology and the startup ecosystem and to identify trends in technology entrepreneurship. Using BigKinds, we examined GenAI-related news articles, major issues, and shifts in trends in Korean news from 1990 through August 10, 2023, centered on the period before and after the emergence of ChatGPT, and visualized their relationships through network analysis and keyword visualization. The results show that mentions of GenAI in articles increased steadily from 2017 to 2023. In particular, the ChatGPT service built on OpenAI's GPT-3.5 emerged as a major issue, signaling the popularization of language-model-based GenAI technologies such as OpenAI's DALL-E, Google's MusicLM, and VoyagerX's Vrew. Generative AI has thus demonstrated its usefulness across diverse fields, and since the launch of ChatGPT, Korean companies have been actively developing Korean-language models. Startups such as Wrtn Technologies are also using GenAI to expand their reach in technology entrepreneurship. This study confirms the connection between GenAI technology and startup activity, which suggests support for building innovative business strategies and is expected to continue shaping the development of GenAI technology and the growth of the startup ecosystem. Further research is needed on international trends, a wider range of analytical methods, and the applicability of GenAI in real-world settings; such efforts are expected to contribute to the advancement of GenAI technology and the growth of the startup ecosystem.
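The keyword-network step described in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual pipeline: the articles and keywords below are hypothetical (BigKinds itself supplies extracted keywords per article), and the co-occurrence threshold is arbitrary.

```python
# Minimal keyword co-occurrence network sketch. Each article is reduced
# to a keyword list (hypothetical data); an edge's weight is the number
# of articles in which the two keywords appear together.
from itertools import combinations
from collections import Counter

articles = [
    ["ChatGPT", "OpenAI", "startup"],
    ["ChatGPT", "GPT-3.5", "OpenAI"],
    ["startup", "ChatGPT", "language model"],
]

edges = Counter()
for kws in articles:
    for a, b in combinations(sorted(set(kws)), 2):
        edges[(a, b)] += 1

# Keyword pairs co-occurring in 2+ articles form the core of the network.
core = {pair: w for pair, w in edges.items() if w >= 2}
print(core)
```

In a real analysis the weighted edge list would be loaded into a graph library for centrality measures and visualization; the counting logic stays the same.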


멀티 모달리티 데이터 활용을 통한 골다공증 단계 다중 분류 시스템 개발: 합성곱 신경망 기반의 딥러닝 적용 (Multi-classification of Osteoporosis Grading Stages Using Abdominal Computed Tomography with Clinical Variables : Application of Deep Learning with a Convolutional Neural Network)

  • 하태준;김희상;강성욱;이두희;김우진;문기원;최현수;김정현;김윤;박소현;박상원
    • 한국방사선학회논문지
    • /
    • Vol. 18, No. 3
    • /
    • pp.187-201
    • /
    • 2024
  • Osteoporosis is a major health problem worldwide, yet it often goes undetected until a fracture occurs. In this study, to improve early detection, we developed a deep learning (DL) system that classifies osteoporosis stages (normal, osteopenia, osteoporosis) from abdominal computed tomography (CT) images. A total of 3,012 contrast-enhanced abdominal CT images and each patient's T-score obtained by dual-energy X-ray absorptiometry (DXA) were used for model development. Models were implemented for unstructured image data alone, for structured demographic information alone, and for a multimodal approach using both simultaneously, with all patients classified into normal, osteopenia, and osteoporosis groups by T-score. The multimodal model combining unstructured and structured data performed best, with an area under the receiver operating characteristic curve of 0.94 and an accuracy of 0.80. The models were interpreted with gradient-weighted class activation mapping (Grad-CAM), which highlighted clinically relevant features in the images and indicated the femoral neck as a site at high risk of osteoporotic fracture. This study shows that DL can accurately identify osteoporosis stages from clinical data and demonstrates the potential of abdominal CT imaging for detecting osteoporosis early so that appropriate treatment can reduce fracture risk.
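The T-score grouping the abstract mentions can be made concrete. The paper does not spell out its cutoffs, so this sketch assumes the conventional WHO thresholds (normal T ≥ -1.0, osteopenia between -1.0 and -2.5, osteoporosis T ≤ -2.5):

```python
# Label assignment from DXA T-scores, assuming the conventional WHO
# cutoffs (the study itself does not state its exact thresholds).
def grade(t_score: float) -> str:
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"

print([grade(t) for t in (0.3, -1.7, -2.5)])
# → ['normal', 'osteopenia', 'osteoporosis']
```

These labels would serve as the three-class targets for the CNN and multimodal models described above.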

가전제품 소비자의 Channel Equity에 관한 탐색적 연구 (An Exploratory Study on Channel Equity of Electronic Goods)

  • 서용구;이은경
    • 마케팅과학연구
    • /
    • Vol. 18, No. 3
    • /
    • pp.1-25
    • /
    • 2008
  • This study investigates consumers' preferences for and use of retail channels for electronic goods, analyzing channel choice and satisfaction to take an exploratory look at what we call "channel equity," the equity consumers attach to a particular channel. The analysis shows that the multichannel shopping environment for electronic goods produces distinct purchase patterns and shopping motives by channel. Department stores and brand dealerships are strong on product quality and after-sales service, while discount stores, mass merchandisers, TV home shopping, internet shopping malls, and electronics marketplaces compete on price. In channel satisfaction, department stores and dealerships, with good after-sales service, rank relatively high. The components of channel equity include price competitiveness, comparison shopping, convenience, after-sales service, salesperson expertise, delivery speed, ease of product search, salesperson courtesy, store comfort, and transport accessibility. Department stores scored highest on almost every component and thus earned the highest channel equity. Internet shopping malls have an advantage in ease of product search, while TV home shopping leads in comparison shopping and price competitiveness; the relative weights of the equity components differed markedly by channel. After evaluating channel satisfaction, the study presents an equity portfolio and the equity composition of each channel, but full-scale research on the concept of channel equity and tools for managing it remains to be done.


머신러닝과 KSCA를 활용한 디지털 사진의 색 분석 -한국 자연 풍경 낮과 밤 사진을 중심으로- (Color Analyses on Digital Photos Using Machine Learning and KSCA - Focusing on Korean Natural Daytime/nighttime Scenery -)

  • 권희은;구자준
    • 트랜스-
    • /
    • Vol. 12
    • /
    • pp.51-79
    • /
    • 2022
  • This study explores a way of deriving reference colors for color planning in content production. The target images were photographs of natural scenery in Korea; machine learning was used to determine which colors represent day and night, KSCA was used to derive color frequencies, and the two sets of results were compared and analyzed. When the colors of day and night photographs were separated by machine learning at the 51-100% level, the day-class color region was about 2.45 times larger than the night-class region. Day-class colors were distributed by lightness around white, and night-class colors around black. There were 647 colors in the day class at 70% or above and 252 in the night class at 70% or above, with the remaining 101 colors in the middle range (31-69%); the middle region was small, so day and night were fairly clearly separated. The distribution of the two classes made it possible to identify the boundary color values, separated by lightness, between them. The KSCA frequency analysis showed that bright daytime photographs are dominated by yellowish colors and dark nighttime photographs by bluish colors. In the daytime frequencies, the top 40% of colors were so low in chroma as to be nearly achromatic, and colors close to white and black were the most frequent, indicating a large lightness contrast. In the nighttime frequencies, roughly the top 50% of colors were dark, at Munsell value 2; mid-frequency colors (50-80%) were slightly lighter (value 3-4), and in the bottom 20% the lightness differences among colors were large. Warm colors appeared only intermittently, within the bottom 8% of frequencies. Viewed as a color scheme, the photographs formed a harmonious palette centered on navy blue. The distribution and frequency results of this study can serve as reference material for color planning in digital design dealing with Korean natural scenery. The class-split results can also indicate which of the two classes a given color is closer to when it is used as a dominant or background color, and if the analyzed images were divided into several classes, even colors not used in an image could be assigned a degree of closeness to each class according to the color distribution of that class.
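The day/night separation by lightness reported above can be illustrated with a toy brightness score. The boundary value and the formula here are illustrative placeholders, not the decision boundary the study's classifier learned:

```python
# Toy sketch of the day/night split: day-class colors cluster around
# white and night-class colors around black, so a simple brightness
# score separates them. The 0-255 RGB inputs and the boundary of 128
# are illustrative assumptions.
def brightness(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b  # standard perceptual luma

def day_or_night(rgb, boundary=128.0):
    return "day" if brightness(rgb) >= boundary else "night"

print(day_or_night((250, 245, 240)), day_or_night((20, 25, 60)))
# → day night
```

A color near the boundary would correspond to the small middle (31-69%) region the study observed between the two classes.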

한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발 (DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA)

  • 박만배
    • 대한교통학회:학술대회논문집
    • /
    • 27th Conference of the Korean Society of Transportation, 1995
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessment and recommendations in the future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In the traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The result of SELINK analysis by using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments, that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 by using 32 selected links and 1.001 by using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run by using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not provide any specific pattern of over or underestimate. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimate of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT that is forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results by using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
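The SELINK adjustment at the core of the method can be sketched directly from its definition: the link adjustment factor is the ratio of the observed ground count to the assigned volume on a selected link, applied to the zones whose trips use that link. The link volumes and zonal productions below are illustrative numbers, not values from the study:

```python
# Sketch of one SELINK adjustment step: compute the link adjustment
# factor (ground count / assigned volume) and scale the productions of
# the zones whose trips were assigned to the selected link.
# All numbers are illustrative, not from the Wisconsin data.
def link_adjustment_factor(ground_count: float, assigned_volume: float) -> float:
    return ground_count / assigned_volume

factor = link_adjustment_factor(ground_count=1200.0, assigned_volume=1500.0)

zone_productions = {"zone_a": 500.0, "zone_b": 250.0}  # zones using the link
adjusted_productions = {z: p * factor for z, p in zone_productions.items()}
print(round(factor, 2), adjusted_productions)
# → 0.8 {'zone_a': 400.0, 'zone_b': 200.0}
```

In the study this step is iterated (up to four repetitions) over 16 or 32 selected links, with the gravity model re-run after each round of adjusted productions and attractions.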


Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • Vol. 20, No. 2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet, the danger of overflowing personal information is increasing because the data retrieved by the sensors usually contains privacy information. Various technical characteristics of context-aware applications have more troubling implications for information privacy. In parallel with increasing use of context for service personalization, information privacy concerns have also increased such as an unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing context-aware personalized service success. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of a need for a better understanding of information privacy concepts. Especially, it requires that the factors of information privacy should be revised according to the characteristics of new technologies. However, previous information privacy factors of context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have only focused on a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describe information privacy on context-aware applications. 
Second, user survey has been widely used to identify factors of information privacy in most studies despite the limitation of users' knowledge and experiences about context-aware computing technology. To date, since context-aware services have not been widely deployed on a commercial scale yet, only very few people have prior experiences with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Nevertheless, conducting a survey, assuming that the participants have sufficient experience or understanding about the technologies shown in the survey, may not be absolutely valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they only consider location information as a context data). A better understanding of information privacy concern in context-aware personalized services is highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain a reliable opinion from the experts and to produce a rank-order list. It, therefore, lends itself well to obtaining a set of universal factors of information privacy concern and its priority. An international panel of researchers and practitioners who have the expertise in privacy and context-aware system fields were involved in our research. Delphi rounds formatting will faithfully follow the procedure for the Delphi study proposed by Okoli and Pawlowski. 
This will involve three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study. We performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid this, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factors to determine the final sub-factors from the candidates, which were drawn from the literature survey. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the perspective of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Second, by introducing IPOS as the factor division, this study considered privacy issues in service delivery and display that existing studies had almost entirely overlooked. Lastly, for each factor, it related the level of importance to professionals' opinions about the extent of users' privacy concerns. A traditional user questionnaire was not chosen because users still largely lack understanding of and experience with this new technology. For understanding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensory network as the most important technological characteristics of context-aware personalized services.
In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and which technologies are needed in what sequence, to acquire particular types of users' context information. Most studies, presupposing the continued development of context-aware technology, focus on which services and systems should be provided and developed using context information. However, the results of this study show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. Building on the sub-factor evaluation, additional studies would be needed on approaches to reducing users' privacy concerns about technological characteristics such as a highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded, disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which relates to output. The results show that delivery and display, which present services to users in context-aware personalized services built around the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
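The concordance analysis used to track expert agreement across Delphi rounds is typically Kendall's coefficient of concordance W; the abstract does not name its exact statistic, so that choice is an assumption here. A minimal implementation (no correction for tied ranks):

```python
# Kendall's W for m experts ranking n items: W = 12*S / (m^2 * (n^3 - n)),
# where S is the sum of squared deviations of the rank totals from their
# mean. W = 1 means perfect agreement, W = 0 means no agreement.
# The rankings below are illustrative, not the study's data.
def kendalls_w(rankings):
    m, n = len(rankings), len(rankings[0])
    totals = [sum(expert[i] for expert in rankings) for i in range(n)]
    mean = sum(totals) / n
    s = sum((t - mean) ** 2 for t in totals)
    return 12 * s / (m * m * (n ** 3 - n))

# Three experts ranking four factors identically:
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
# → 1.0
```

Tracking W over successive rounds, as the study describes, shows whether the panel is converging toward consensus before the final ranking is accepted.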

광주시(光州市) 의료시설(醫療施設)의 입지(立地)와 주민(住民)의 효율적(效率的) 이용(利用) (The Location of Medical Facilities and Its Inhabitants' Efficient Utilization in Kwangju City)

  • 전경숙
    • 한국지역지리학회지
    • /
    • Vol. 3, No. 2
    • /
    • pp.163-193
    • /
    • 1997
  • In today's movement toward a welfare society, access to medical facilities, which bears directly on health promotion, is a key issue. From a quality-of-life perspective, health screening, prevention and recovery, long-term care, and emergency services are growing in importance alongside the treatment of disease, and as the population ages, the efficient location of medical facilities has become a central concern. Medical facilities are basic, essential central services directly tied to residents' survival, and residents should receive equal benefit from them. Realizing this fundamentally requires an even distribution of primary care facilities grounded in efficiency and equity. This study therefore takes Kwangju City as a case area and analyzes the location of medical facilities and residents' efficient use of them. The analysis considers both the facility side and the user side, drawing on statistical data and previous research as well as questionnaire and field survey data. It first examines changes in the medical environment and in medical facilities, then analyzes locational characteristics by facility type and the level of medical service by district in light of the distribution of residents. After identifying utilization patterns by facility type and their determinants, it forecasts future utilization patterns, identifies problem areas, and finally suggests rational directions for facility location and management. The results are significant both as baseline data for the appropriate siting of future medical facilities and, in applied terms, for resolving inequities among local residents.
