• Title/Abstract/Keyword: R&E network

265 search results (processing time: 0.022 s)

Factors Influencing the Adoption of Location-Based Smartphone Applications: An Application of the Privacy Calculus Model

  • 차훈상
    • Asia Pacific Journal of Information Systems / Vol. 22, No. 4 / pp. 7-29 / 2012
  • Smartphones and their applications (i.e., apps) are increasingly penetrating consumer markets. According to a recent report from the Korea Communications Commission, nearly 50% of mobile subscribers in South Korea are smartphone users, accounting for over 25 million people. In particular, the importance of the smartphone has risen as a geospatially-aware device that provides various location-based services (LBS) through its GPS capability. Popular LBS include map and navigation, traffic and transportation updates, shopping and coupon services, and location-sensitive social network services. Overall, the emerging location-based smartphone apps (LBA) offer significant value by providing greater connectivity, personalization, and information and entertainment in a location-specific context. At the same time, the rapid growth of LBA and their benefits has been accompanied by concerns over the collection and dissemination of individual users' personal information through ongoing tracking of their location, identity, preferences, and social behaviors. The majority of LBA users tend to agree and consent to the LBA provider's terms and privacy policy on the use of location data in order to get the immediate services. This tendency further increases the potential risk of unprotected exposure of personal information and serious invasions and breaches of individual privacy. To address the complex issues surrounding LBA, particularly from the user's behavioral perspective, this study applied the privacy calculus model (PCM) to explore the factors that influence the adoption of LBA. According to PCM, consumers are engaged in a dynamic adjustment process in which privacy risks are weighed against the benefits of information disclosure. Consistent with the principal notion of PCM, we investigated how individual users make a risk-benefit assessment in which personalized service and locatability act as benefit-side factors and information privacy risks act as a risk-side factor accompanying LBA adoption. In addition, we consider the moderating role of trust in the service provider on the prohibiting effect of privacy risks on user intention to adopt LBA. Further, we include perceived ease of use and usefulness as additional constructs to examine whether the technology acceptance model (TAM) can be applied in the context of LBA adoption. The research model with ten (10) hypotheses was tested using data gathered from 98 respondents through a quasi-experimental survey method. During the survey, each participant was asked to navigate a website where an experimental simulation of an LBA allowed the participant to purchase time- and location-sensitive discounted tickets for nearby stores. Structural equation modeling using partial least squares validated the instrument and the proposed model. The results showed that six (6) out of ten (10) hypotheses were supported. Regarding the core PCM, H2 (locatability → intention to use LBA) and H3 (privacy risks → intention to use LBA) were supported, while H1 (personalization → intention to use LBA) was not. Further, we could not find any interaction effects (personalization × privacy risks, H4; locatability × privacy risks, H5) on the intention to use LBA. In terms of privacy risks and trust, as mentioned above we found a significant negative influence of privacy risks on intention to use (H3), but a positive influence of trust, which supported H6 (trust → intention to use LBA). The moderating effect of trust on the negative relationship between privacy risks and intention to use LBA was tested and confirmed, supporting H7 (privacy risks × trust → intention to use LBA). The two hypotheses regarding the TAM, H8 (perceived ease of use → perceived usefulness) and H9 (perceived ease of use → intention to use LBA), were supported; however, H10 (perceived usefulness → intention to use LBA) was not. Results of this study offer the following key findings and implications. First, the application of PCM was found to be a good analysis framework in the context of LBA adoption. Many of the hypotheses in the model were confirmed, and the high value of R² (i.e., 51%) indicated a good fit of the model. In particular, locatability and privacy risks were found to be appropriate PCM-based antecedent variables. Second, the existence of a moderating effect of trust in the service provider suggests that the same marginal change in the level of privacy risks may differentially influence the intention to use LBA. That is, while privacy risks increasingly become an important social issue and will negatively influence the intention to use LBA, it is critical for LBA providers to build consumer trust and confidence to successfully mitigate this negative impact. Lastly, we could not find sufficient evidence that the intention to use LBA is influenced by perceived usefulness, which has been very well supported in most previous TAM research. This suggests that future research should examine the validity of applying TAM, and further extend or modify it, in the context of LBA or other similar smartphone apps.
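For readers unfamiliar with moderation tests such as H7, the following Python sketch illustrates the general idea of an interaction-term analysis on synthetic data. It is not the paper's PLS procedure: the generated scores, the coefficients, and the ordinary least-squares fit are all placeholder assumptions used only to show the mechanics.

```python
# Minimal sketch of a moderation (interaction) test in the spirit of H7:
# privacy risks x trust -> intention to use. Synthetic placeholder data,
# NOT the study's survey constructs or its PLS estimation.
import numpy as np

rng = np.random.default_rng(0)
n = 98                                  # sample size reported in the abstract
risk = rng.normal(size=n)               # standardized privacy-risk score
trust = rng.normal(size=n)              # standardized trust score
# Assumed data-generating process: risk hurts intention, trust helps,
# and trust dampens the negative risk effect (positive interaction).
intention = -0.4 * risk + 0.3 * trust + 0.2 * risk * trust \
            + rng.normal(scale=0.5, size=n)

# Regress intention on risk, trust, and their product term.
X = np.column_stack([np.ones(n), risk, trust, risk * trust])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(dict(zip(["const", "risk", "trust", "risk_x_trust"], coef.round(3))))
# A positive risk_x_trust coefficient means trust weakens the negative
# effect of privacy risks on intention, i.e., moderation in the H7 sense.
```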

Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.; Poluzzi, R.; Russo, B.
    • Korean Institute of Intelligent Systems Conference Proceedings / Fifth International Fuzzy Systems Association World Congress (Korea Fuzzy Logic and Intelligent Systems Society, 1993) / pp. 1041-1043 / 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e., their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to predefined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e., by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides satisfactory computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can occur: it is quite possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e., points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on this hypothesis. Moreover, we use a technique that, while not restricting the shapes of membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for these specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the bit width of a membership value, and dm(fm) is the bit width of the index of the corresponding membership function. In our case, Length = 3 × (5 + 3) = 24, so the memory dimension is 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to store on each memory row the membership value of every fuzzy set, a word of 8 × 5 bits; the memory would therefore have been 128 × 40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value in at most three fuzzy sets; elements 32, 64, and 96 of the universe of discourse are memorized accordingly. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then the corresponding non-null weight derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
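The word-length arithmetic above can be made concrete with a short sketch. The Python below is a minimal illustration, not the authors' hardware design: the function names (build_sparse_memory, lookup) and the packing into (index, value) pairs are assumptions chosen to mirror the abstract's parameters (128-element universe, 8 fuzzy sets, 32 truth levels, at most nfm = 3 non-null values per element).

```python
# Sketch of the sparse membership-function storage described in the abstract.
# Assumptions, not the paper's circuit: we model each memory row in software.
NFM = 3          # max non-null membership values per universe element (nfm)
VALUE_BITS = 5   # 32 truth levels -> dm(m) = 5
INDEX_BITS = 3   # 8 fuzzy sets    -> dm(fm) = 3
U_SIZE = 128     # resolution of the universe of discourse
WORD_BITS = NFM * (VALUE_BITS + INDEX_BITS)  # = 24, vs. 8 * 5 = 40 for full storage

def build_sparse_memory(membership_functions):
    """Pack each universe element's non-null (set index, value) pairs.

    membership_functions: list of 8 lists, each holding 128 values in 0..31.
    Returns one row per universe element, each row at most NFM pairs wide.
    """
    memory = []
    for u in range(U_SIZE):
        row = [(i, mf[u]) for i, mf in enumerate(membership_functions) if mf[u] > 0]
        assert len(row) <= NFM, f"element {u} violates the nfm <= {NFM} hypothesis"
        row += [(0, 0)] * (NFM - len(row))  # pad unused slots with null weights
        memory.append(row)
    return memory

def lookup(memory, u, set_index):
    """Mimic the combinatorial comparator: match each stored index against
    the requested fuzzy-set index; output its weight, or zero on no match."""
    for idx, value in memory[u]:
        if idx == set_index and value > 0:
            return value
    return 0
```

With these parameters each row needs NFM × (VALUE_BITS + INDEX_BITS) = 24 bits, versus 8 × 5 = 40 bits for full vectorial storage, matching the 128 × 24 vs. 128 × 40 figures in the abstract.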

The Role of Soft Law for Space Debris Mitigation in International Law

  • 김한택
    • The Korean Journal of Air & Space Law and Policy / Vol. 30, No. 2 / pp. 469-497 / 2015
  • This paper concerns the role of soft law for space debris mitigation in international law. The five hard-law space treaties — the 1967 Outer Space Treaty, the 1968 Rescue Agreement, the 1972 Liability Convention, the 1975 Registration Convention, and the 1979 Moon Agreement — are excluded from the analysis because they do not deal directly with space debris. In particular, Article IX of the 1967 Outer Space Treaty merely uses the terms 'harmful contamination', 'harmful interference', and 'adverse changes in the environment' without defining them, and Article 7 of the 1979 Moon Agreement likewise fails to define key concepts such as 'harmful contamination', 'adverse changes', 'disruption of the environment', and 'harmful effects'. Indeed, in the 1978 Cosmos 954 incident, although both the offending state (the Soviet Union) and the injured state (Canada) were parties to the 1967 Outer Space Treaty and the 1972 Liability Convention, they settled the matter by concluding a protocol rather than invoking the space treaties, which cast doubt on the practical value of those treaties. The practice of adopting soft law to resolve problems where treaty-making or the conclusion of supplementary protocols is difficult, familiar from international environmental law and international economic law, is now being applied to space law as well. Soft-law instruments on space debris mitigation include the IADC Guidelines, the Space Debris Mitigation Guidelines, the International Code of Conduct for Outer Space Activities, and the Guidelines for the Long-term Sustainability of Outer Space Activities. Many scholars argue that some of the principles expressed in these resolutions reflect customary international law. For example, in the Principles Relevant to the Use of Nuclear Power Sources in Outer Space (NPS Principles), the rules on the notification, use, and liability of NPS in outer space can be regarded as having 'a fundamentally norm-creating character', capable of forming the basis of a general rule of law, and state practice further confirms this. Such soft law can also assist by lending precision to the codification of existing customary international law, or can precede and help shape new customary international law. The UN General Assembly's recommendation of 12 November 1974 that General Assembly declarations and resolutions be taken into consideration by the International Court of Justice (ICJ) as one means by which the development of international law may be reflected is very significant in this regard. However, as E. R. C. van Bogaert has pointed out, the legal value of such resolutions, recommendations, and guidelines should not be exaggerated; the General Assembly has the power to vote on recommendations but cannot impose binding legal rules. Such recommendations may be regarded as expressions of consensus, but they remain imperfect law. From a legal standpoint, soft law cannot be equated with an actual treaty. Strictly speaking, therefore, the soft law on space debris mitigation has no legally binding force; these instruments are not legal documents binding on states and, from the perspective of the existing space-law treaties, can only play a supplementary role as a form of recommendation. To move beyond the limits of soft law, it is hoped that UN COPUOS, the cradle of space law, will build on the space debris mitigation soft law to prepare legally binding measures.

Permanent Preservation and Use of Historical Archives: Preservation Issues and Digitization of Historical Collections

  • 이상민
    • The Korean Journal of Archival Studies / No. 1 / pp. 23-76 / 2000
  • In this paper, I examine what has been researched and determined about preservation strategy and the selection of preservation media in the Western archival community. Archivists have primarily been concerned with the 'preservation' and 'use' of archival materials worthy of permanent preservation. In the new information era, the preservation and use of archival materials face new challenges. The life expectancy of paper records has been shortened by the acidification and brittleness of modern papers. The emergence of information technology also affects the traditional ways of preserving and using archival materials. User expectations are becoming so technology-oriented and so complicated that archivists must act like information managers using computer technology rather than practicing the traditional archival handicraft. Preservation strategy plays an important role in archival management as well as in information management; for cost-effective management of archives and archival institutions, a preservation strategy is a must. A preservation strategy encompasses all aspects of the archival preservation process and its practices, from selection of archives, appraisal, inventorying, arrangement, description, conservation, microfilming or digitization, and archival buildings to access services. These archival functions should be considered in relation to each other to ensure proper preservation of archival materials. In an integrated preservation strategy, 'preservation' and 'use' should be combined and fulfilled without sacrificing one for the other. Preservation strategy planning is essential for determining the policies by which archives keep their holdings safe and provide people with maximum access in the most effective ways. Preservation microfilming ensures the permanent preservation of the information held in important archival materials. To this end, detailed standards have been developed to guarantee the permanence of microfilm as well as its product quality. Silver gelatin film can last up to 500 years in an optimal storage environment and is the most viable option for a permanent preservation medium. ISO and ANSI developed such standards for the quality of microfilms and microfilming technology, and preservation microfilming guidelines were also developed to ensure effective archival management and good picture quality. It is essential to assess the need for preservation microfilming: limited resources always put a restraint on preservation management, and the appraisal (and selection) of what is to be preserved is the most important part of preservation microfilming. In addition, microfilms of standard quality can be scanned to produce quality digital images for instant use over the internet. As information technology develops, archivists have begun to use it to make preservation easier and more economical, and to promote the use of archival materials through computer communication networks. Digitization was introduced to provide easy and universal access to unique archives, and its large capacity for preserving archival data seems very promising. However, digitization, i.e., transferring images of records to electronic codes, still needs to be standardized. Digitized data are electronic records, and at present electronic records are very unstable and cannot be preserved permanently. Digital media, including optical disks, have not been proven to be reliable media for permanent preservation. Because of their chemical coatings and their physical reliance on light, they are not stable and can be preserved for at most 100 years in an optimal storage environment; most CD-Rs can last only 20 years. Furthermore, the obsolescence of hardware and software makes it hard to reproduce digital images made with earlier versions. Even when reformatting is possible, the cost of refreshing or upgrading digital images is very high, and the process has to be repeated at least every five to ten years. No standard for coping with this obsolescence of hardware and software has yet come into being. In short, digital permanence is not a fact; it remains an uncertain possibility. Archivists must consider in their preservation planning both the risk of introducing new technology and its promising possibilities at the same time. In planning the digitization of historical materials, archivists should incorporate plans for maintaining the digitized images and reformatting them for the coming generations of new applications. Without such comprehensive planning, future use of the expensive digital images will become unavailable — a loss of information, and a final failure of both the 'preservation' and the 'use' of archival materials. As Peter Adelstein said, it is wise to be conservative when considerations of conservation are involved.

Implementation of an Integrated Monitoring System for the Trace and Path Prediction of Infectious Diseases

  • 김은경;이석;변영태;이혁재;이택진
    • Journal of Internet Computing and Services / Vol. 14, No. 5 / pp. 69-76 / 2013
  • Infectious diseases with high transmissibility and pathogenicity, such as novel influenza and avian influenza, are increasing worldwide. An infectious disease is an illness caused by a specific pathogen and transmitted from an infected person to a susceptible host. Pathogens include bacteria, spirochetes, rickettsiae, viruses, fungi, and parasites, and they cause respiratory, gastrointestinal, hepatic, and acute febrile illnesses. Transmission occurs through various routes: food and drinking water, insect vectors, inhalation of pathogens, and contact with other people. Most countries use mathematical models to predict and prepare for the spread of infectious diseases. Unlike in the past, however, modern ground and underground transportation has made the spread of disease faster and more complex, leaving little time to prepare countermeasures. A system that can predict the transmission path of an infectious disease is therefore needed to contain its spread. To address this problem, we implemented an integrated information system for real-time surveillance and management that can trace and predict the transmission paths of infectious diseases. This paper covers the prediction of transmission paths; the system is based on the classical Susceptible-Infectious-Recovered (SIR) model. The model's distinguishing feature is that it incorporates buses, trains, cars, and airplanes, so that contact between people can be represented by travel both within and between cities. Real data were adjusted to the geographic characteristics of Korea, so the model reflects the Korean situation well. In addition, because the vaccination region and dose can be adjusted over time, users can determine through simulation when and where vaccines should be administered first. The simulation is based on several assumptions and scenarios. Using data from Statistics Korea, five major locations with heavy population movement were selected: Seoul, Incheon International Airport, Gangneung, Pyeongchang, and Wonju. These locations were assumed to be connected as a network in which the disease spreads only through the four modes of transportation. Daily traffic volumes were obtained from the Korean Statistical Information Service (KOSIS), city populations from Statistics Korea, novel influenza A data from the Korea Centers for Disease Control and Prevention, and aviation statistics from the Air Portal system. The daily traffic, demographic, influenza, and aviation data were adjusted to Korea's geographic characteristics to build assumptions and scenarios close to reality. For an outbreak of novel influenza A at Incheon Airport, three scenarios were simulated — no vaccination, vaccination in Seoul, and vaccination in Pyeongchang — and the peak infection dates and the proportions of I (infectious) were compared. Without vaccination, the infection peaked earliest in Seoul (day 37), which has the heaviest traffic, and latest in Pyeongchang (day 43), which has the lightest; the proportion of I was highest in Seoul and lowest in Pyeongchang. With vaccination in Seoul, the peak was again earliest in Seoul (day 37) and latest in Pyeongchang (day 43), and the proportion of I was highest in Gangneung and lowest in Pyeongchang. With vaccination in Pyeongchang, the peak was earliest in Seoul (day 37) and latest in Pyeongchang (day 43), and the proportion of I was highest in Gangneung and lowest in Pyeongchang. These results confirm that when novel influenza A breaks out, its spread to each city is driven by traffic volume. Since the transmission path of an epidemic depends on each city's traffic, tracing and predicting the path through traffic analysis should make effective countermeasures possible.
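As a rough illustration of the kind of model the abstract describes, the sketch below implements a discrete-time SIR model with inter-city mixing driven by a traffic matrix. It is not the authors' system: the populations, the rates beta and gamma, and the traffic volumes are invented placeholders, and only the five location names follow the abstract.

```python
# Minimal discrete-time SIR model with inter-city travel, as a sketch of the
# abstract's approach. All numbers below are placeholder assumptions.
import numpy as np

cities = ["Seoul", "Incheon Airport", "Gangneung", "Pyeongchang", "Wonju"]
N = np.array([10_000_000, 60_000, 215_000, 43_000, 350_000], float)  # placeholders
beta, gamma = 0.35, 0.14          # assumed infection / recovery rates per day

# traffic[i, j]: assumed daily travelers from city i to city j, all modes
# combined; kept symmetric so each city's population stays constant.
traffic = np.array([
    [0,     50000, 8000, 3000, 12000],
    [50000, 0,     1000,  500,  1500],
    [8000,  1000,  0,    2500,  4000],
    [3000,  500,   2500, 0,     2000],
    [12000, 1500,  4000, 2000,  0],
], float)

S, I, R = N.copy(), np.zeros_like(N), np.zeros_like(N)
S[1] -= 10; I[1] += 10            # seed the outbreak at Incheon Airport

peak_day, peak_frac = np.zeros(5, int), np.zeros(5)
for day in range(1, 121):
    # local SIR dynamics within each city
    new_inf = beta * S * I / N
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    # mixing: travelers carry each compartment in proportion to its share
    for X in (S, I, R):
        flow = traffic * (X / N)[:, None]      # X-travelers leaving each city
        X += flow.sum(axis=0) - flow.sum(axis=1)
    frac = I / N
    newer = frac > peak_frac
    peak_frac[newer] = frac[newer]
    peak_day[newer] = day

for city, d, p in zip(cities, peak_day, peak_frac):
    print(f"{city}: infectious fraction peaks on day {d} at {p:.4f}")
```

Under this toy parameterization, the cities with heavier traffic to the seed location peak earlier, which is the qualitative pattern the abstract reports; vaccination could be modeled by moving part of S to R in a chosen city on a chosen day.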