• Title/Summary/Keyword: dynamic capability

Search Results: 793

A Study of Pervasive Roaming Services with Security Management Framework (퍼베이시브 로밍 서비스를 위한 보안 관리 프레임워크)

  • Kim, Gwan-Yeon;Hwang, Zi-On;Kim, Yong;Uhm, Yoon-Sik;Park, Se-Hyun
    • Journal of the Korea Institute of Information Security & Cryptology / v.17 no.4 / pp.115-129 / 2007
  • Ubiquitous and autonomic computing environments are open and dynamic, providing universal wireless access through the seamless integration of software and system architectures. Ubiquitous computing must offer user-centric pervasive services according to the wireless access. Establishing roaming services with predefined security associations among all mobile devices across various networks is therefore especially complex and difficult. Furthermore, there has been little study of security coordination for a realistic autonomic system capable of authenticating users with different kinds of user interfaces, efficient context modeling with user profiles on Smart Cards, and providing pervasive access service by setting roaming agreements with a variety of wireless network operators. This paper proposes a Roaming Coordinator-based security management framework that supports interoperator roaming with pervasive security services among push-service-based network domains. Compared to traditional mobile systems, in which a Universal Subscriber Identity Module (USIM) is dedicated to one service domain only, the proposed system with a Roaming Coordinator is more open, secure, and easy to update for security services throughout different network domains such as public wireless local area networks (PWLANs), 3G cellular networks, and wireless metropolitan area networks (WMANs).

Damage Estimation for Offshore Tubular Members Under Quasi-Static Loading (준정적하중(準靜的荷重)을 받는 해양구조물(海洋構造物)의 원통부재(圓筒部材)에 대한 손상예측(損傷豫測))

  • Paik, Jeom-K.;Shin, Byung-C.;Kim, Chang-Y.
    • Bulletin of the Society of Naval Architects of Korea / v.26 no.4 / pp.81-93 / 1989
  • The present study attempts to develop a theoretical model for the damage estimation of offshore tubular members subjected to accidental impact loads due to collision, falling objects, and so on. For simplicity, however, this paper postulates that the accidental load can be approximated as quasi-static, so that dynamic effects are neglected. Based upon theoretical and experimental results obtained from the present study as well as from the existing literature, load-displacement relations that take the interaction between local denting and global bending deformation into account are presented in explicit form for the case where a concentrated lateral load acts on a tubular member whose ends are rotationally free and axially restrained, so that membrane forces develop. This allows a practical estimation of the local denting and global bending damage of tubular members under accidental loads, and the collision energy absorption capability of the member can be calculated by integrating the area below the given load-displacement curves, provided that all of the energy is dissipated in deforming the member itself.
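
As a minimal numerical sketch of the energy calculation described in the abstract (the load-displacement points below are hypothetical, not taken from the paper), the collision energy absorbed by the member is the area under the load-displacement curve:

```python
import numpy as np

# Hypothetical load-displacement curve for a laterally loaded tubular member
# (load in kN, lateral displacement in m); values are for illustration only.
displacement = np.array([0.00, 0.02, 0.05, 0.10, 0.15, 0.20])  # m
load = np.array([0.0, 150.0, 320.0, 480.0, 560.0, 600.0])      # kN

# Absorbed energy = area under the curve, evaluated with the trapezoidal rule.
absorbed_energy = np.sum(0.5 * (load[1:] + load[:-1]) * np.diff(displacement))
print(f"Energy absorbed by the member: {absorbed_energy:.1f} kJ")  # kN*m = kJ
```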


Introduction of Two-region Model for Simulating Long-Term Erosion of Bentonite Buffer (벤토나이트 완충재 장기 침식을 모사하기 위한 Two-region 모델 소개)

  • Jaewon Lee;Jung-Woo Kim
    • Tunnel and Underground Space / v.33 no.4 / pp.228-243 / 2023
  • Bentonite is widely recognized and utilized as a buffer material in high-level radioactive waste repositories, mainly due to favorable characteristics such as its swelling capability and low permeability. Bentonite buffers play an important role in ensuring the safe disposal of radioactive waste by providing a low-permeability barrier and effectively preventing the migration of radionuclides into the surrounding rock. However, the long-term performance of bentonite buffers remains a subject of ongoing research, and one of the main concerns is erosion of the buffer induced by swelling and groundwater flow. Erosion of the bentonite buffer can significantly impact repository safety by compromising the integrity of the buffer and leading to the formation of colloids that may facilitate the transport of radionuclides through groundwater, consequently elevating the risk of radionuclide migration. Therefore, numerically quantifying the erosion of the bentonite buffer is essential for evaluating its long-term performance, which is crucial for the safety assessment of high-level radioactive waste disposal. In this technical note, the Two-region model, proposed to simulate the erosion behavior of bentonite based on a dynamic bentonite diffusion model, is introduced, and a quantitative evaluation of bentonite buffer erosion is conducted with the model.

Optimal Mesh Size in Three-Dimensional Arbitrary Lagrangian-Eulerian Method of Free-air Explosions (3차원 Arbitrary Lagrangian-Eulerian 기법을 사용한 자유 대기 중 폭발 해석의 최적 격자망 크기 산정)

  • Yena Lee;Tae Hee Lee;Dawon Park;Youngjun Choi;Jung-Wuk Hong
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.6 / pp.355-364 / 2023
  • The arbitrary Lagrangian-Eulerian (ALE) method has been extensively researched owing to its capability to accurately predict the propagation of blast shock waves. Although the use of the ALE method for dynamic analysis can produce unreliable results depending on the mesh size of the finite element, few studies have explored the relationship between the mesh size for the air domain and the accuracy of numerical analysis. In this study, we propose a procedure to calculate the optimal mesh size based on the mean squared error between the maximum blast pressure values obtained from numerical simulations and experiments. Furthermore, we analyze the relationship between the weight of explosive material (TNT) and the optimal mesh size of the air domain. The findings from this study can contribute to estimating the optimal mesh size in blast simulations with various explosion weights and promote the development of advanced blast numerical analysis models.
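
A rough sketch of the mesh-size selection procedure described above (the pressure values and candidate mesh sizes below are invented for illustration and are not the paper's data) could look like this:

```python
import numpy as np

# Hypothetical peak blast pressures (kPa) at four gauge locations: experimental
# values and ALE simulation results for several candidate air-domain mesh sizes (mm).
experimental_peaks = np.array([410.0, 265.0, 180.0, 120.0])

simulated_peaks = {
    5.0:  np.array([402.0, 260.0, 176.0, 118.0]),
    10.0: np.array([388.0, 250.0, 170.0, 113.0]),
    20.0: np.array([340.0, 221.0, 150.0, 100.0]),
}

def mean_squared_error(simulated, experimental):
    """MSE between simulated and experimental maximum blast pressures."""
    return float(np.mean((simulated - experimental) ** 2))

# Choose the mesh size whose simulated peak pressures best match the experiments.
errors = {size: mean_squared_error(peaks, experimental_peaks)
          for size, peaks in simulated_peaks.items()}
optimal_mesh_size = min(errors, key=errors.get)
print(errors, "-> optimal mesh size:", optimal_mesh_size, "mm")
```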

Intelligent Transportation System (ITS) research optimized for autonomous driving using edge computing (엣지 컴퓨팅을 이용하여 자율주행에 최적화된 지능형 교통 시스템 연구(ITS))

  • Sunghyuck Hong
    • Advanced Industrial SCIence / v.3 no.1 / pp.23-29 / 2024
  • In this scholarly investigation, the focus is placed on the transformative potential of edge computing in enhancing Intelligent Transportation Systems (ITS) for the facilitation of autonomous driving. The intrinsic capability of edge computing to process voluminous datasets locally and in a real-time manner is identified as paramount in meeting the exigent requirements of autonomous vehicles, encompassing expedited decision-making processes and the bolstering of safety protocols. This inquiry delves into the synergy between edge computing and extant ITS infrastructures, elucidating the manner in which localized data processing can substantially diminish latency, thereby augmenting the responsiveness of autonomous vehicles. Further, the study scrutinizes the deployment of edge servers, an array of sensors, and Vehicle-to-Everything (V2X) communication technologies, positing these elements as constituents of a robust framework designed to support instantaneous traffic management, collision avoidance mechanisms, and the dynamic optimization of vehicular routes. Moreover, this research addresses the principal challenges encountered in the incorporation of edge computing within ITS, including issues related to security, the integration of data, and the scalability of systems. It proffers insights into viable solutions and delineates directions for future scholarly inquiry.

Empirical Analysis of Accelerator Investment Determinants Based on Business Model Innovation Framework (비즈니스 모델 혁신 프레임워크 기반의 액셀러레이터 투자결정요인 실증 분석)

  • Jung, Mun-Su;Kim, Eun-Hee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.1 / pp.253-270 / 2023
  • Research on the investment determinants of accelerators, which are attracting attention for greatly improving the survival rate of startups by simultaneously providing professional incubation and investment, is gradually expanding. However, previous studies lack a theoretical basis for developing investment determinants in the early stages; they borrow factors from angel investors or venture capital, which are similar investors, and are still at the stage of analyzing importance and priority through empirical research. Therefore, this study verified, for the first time in Korea, the discrimination and effectiveness of accelerator investment determinants developed in previous studies based on the business model innovation framework. To this end, we first set the criteria for the success and failure of startup investments based on scale-up theory and conducted a survey of 22 investment experts from 14 accelerators in Korea, securing valid data on a total of 97 startups (52 successful scale-up startups and 45 failed ones), and an independent-sample t-test was conducted to verify the mean difference between these two groups for each accelerator investment determinant. The analysis confirmed that accelerator investment determinants based on the business model innovation framework have considerable discriminating power in finding successful startups and making investment decisions. In addition, when manufacturing-related and service-related startups were analyzed separately to account for industry-specific characteristics of innovation, manufacturing-related startups differed in business model, strategy, and dynamic capability factors, while service-related startups differed in dynamic capabilities. This study has great academic significance in that it verified the practical effectiveness of accelerator investment determinants derived from the business model innovation framework for the first time in Korea, and it has high practical value in that it provides theoretical grounds and detailed information for making effective investment decisions.
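
As a small illustration of the group-comparison step described above (the scores below are randomly generated stand-ins, not the study's survey data), an independent-samples t-test between successful and failed scale-up startups on a single determinant might be run as follows:

```python
import numpy as np
from scipy import stats

# Hypothetical 7-point scores on one determinant (e.g., dynamic capability) for
# successful vs. failed scale-up startups; randomly generated, not the study's data.
rng = np.random.default_rng(0)
success_scores = rng.normal(loc=5.6, scale=1.0, size=52)  # 52 successful startups
failure_scores = rng.normal(loc=4.9, scale=1.1, size=45)  # 45 failed startups

# Independent-samples t-test of the mean difference between the two groups
# (Welch's variant, which does not assume equal variances).
t_stat, p_value = stats.ttest_ind(success_scores, failure_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```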


Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services / v.15 no.3 / pp.101-107 / 2014
  • With the development of online services, recent databases have shifted from static structures to dynamic stream structures. Previous data mining techniques have been used as decision-making tools for tasks such as establishing marketing strategies and DNA analyses. However, the capability to analyze real-time data more quickly is necessary in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of databases or on each transaction, instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, we can obtain the latest mining results reflecting real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As the criteria of our performance analysis, we first consider the algorithms' total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, their maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms conduct their mining work on databases with gradually increasing numbers of items. With respect to mining time and transaction processing, hMiner is faster than Lossy counting. Since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and therefore has to search multiple nodes to reach a candidate frequent pattern. On the other hand, hMiner shows worse performance than Lossy counting in terms of maximum memory usage. hMiner must keep all of the information for candidate frequent patterns in order to store them in hash buckets, while Lossy counting reduces this information by using the lattice structure. Since the storage of Lossy counting can share items concurrently included in multiple patterns, its memory usage is more efficient than that of hMiner. However, hMiner is more efficient than Lossy counting in the scalability evaluation for the following reasons: if the number of items increases, the number of shared items decreases, weakening Lossy counting's memory efficiency; furthermore, as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems although they require a significant amount of memory. Hence, their data structures need to be made more efficient so that they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
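
As a rough illustration of the bucket-and-prune idea behind Lossy counting (simplified here to single items in a character stream rather than the itemsets mined in the paper; the error bound and toy data are arbitrary), a minimal sketch is:

```python
from math import ceil

def lossy_counting(stream, epsilon=0.01):
    """Approximate frequency counting over a stream (single-item variant).

    Any item whose true frequency exceeds epsilon * N is guaranteed to be
    kept in the summary; counts may undercount by at most epsilon * N.
    """
    counts = {}                      # item -> (count, max_error)
    bucket_width = ceil(1 / epsilon)
    n = 0
    for item in stream:
        n += 1
        current_bucket = ceil(n / bucket_width)
        if item in counts:
            count, error = counts[item]
            counts[item] = (count + 1, error)
        else:
            counts[item] = (1, current_bucket - 1)
        if n % bucket_width == 0:    # prune at each bucket boundary
            counts = {k: (c, e) for k, (c, e) in counts.items()
                      if c + e > current_bucket}
    return counts, n

summary, n = lossy_counting("abracadabra" * 50, epsilon=0.1)
print({item: count for item, (count, error) in summary.items()})
```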

A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE) (실시간 소프트웨어의 조절적·단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society / v.4 no.12 / pp.3159-3174 / 1997
  • This paper reports on research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to size and complexity, it is commonly very hard to understand such software during the reengineering process. This research, however, facilitates scalable re/reverse-engineering of real-time software based on the architecture of the software in three-dimensional perspectives: structural, functional, and behavioral views. Firstly, the structural view reveals the overall architecture, specification (outline), and algorithm (detail) views of the software, based on a hierarchically organized parent-child relationship. The basic building block of the architecture is a Software Unit (SWU), generated by user-defined criteria. The architecture facilitates navigation of the software in a top-down or bottom-up way. It captures the specification and algorithm views at different levels of abstraction, and also shows the functional and behavioral information at these levels. Secondly, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc. Each feature of the view contains a different kind of functionality of the software. Thirdly, the behavioral view includes state diagrams, interleaved event lists, etc. This view shows the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability of abstracting and exploding this dimensional information in the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software, and allow engineers to extract reusable components from the software during the reengineering process.
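
The SWU hierarchy described above can be pictured with a small, purely illustrative data-structure sketch (the class and field names below are hypothetical, not the actual SRE data model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SoftwareUnit:
    """Hypothetical sketch of a Software Unit (SWU) node in the architecture."""
    name: str
    specification: str = ""                               # outline-level structural view
    algorithm: str = ""                                   # detail-level structural view
    functional_info: dict = field(default_factory=dict)   # e.g. data/control-flow graphs
    behavioral_info: dict = field(default_factory=dict)   # e.g. state diagrams, event lists
    children: List["SoftwareUnit"] = field(default_factory=list)

    def navigate(self, depth: int = 0) -> None:
        """Top-down traversal of the parent-child architecture."""
        print("  " * depth + self.name)
        for child in self.children:
            child.navigate(depth + 1)

# Toy example of navigating a two-level SWU architecture.
system = SoftwareUnit("FlightControl", children=[
    SoftwareUnit("SensorFusion"),
    SoftwareUnit("ActuatorControl", children=[SoftwareUnit("PIDLoop")]),
])
system.navigate()
```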


Factors Influencing the Adoption of Location-Based Smartphone Applications: An Application of the Privacy Calculus Model (스마트폰 위치기반 어플리케이션의 이용의도에 영향을 미치는 요인: 프라이버시 계산 모형의 적용)

  • Cha, Hoon S.
    • Asia pacific journal of information systems / v.22 no.4 / pp.7-29 / 2012
  • Smartphones and their applications (i.e., apps) are increasingly penetrating consumer markets. According to a recent report from the Korea Communications Commission, nearly 50% of mobile subscribers in South Korea are smartphone users, accounting for over 25 million people. In particular, the importance of the smartphone has risen as a geospatially aware device that provides various location-based services (LBS) equipped with GPS capability. Popular LBS include map and navigation, traffic and transportation updates, shopping and coupon services, and location-sensitive social network services. Overall, the emerging location-based smartphone apps (LBA) offer significant value by providing greater connectivity, personalization, and information and entertainment in a location-specific context. Conversely, the rapid growth of LBA and their benefits have been accompanied by concerns over the collection and dissemination of individual users' personal information through ongoing tracking of their location, identity, preferences, and social behaviors. The majority of LBA users tend to agree and consent to the LBA provider's terms and privacy policy on the use of location data in order to get the immediate services. This tendency further increases the potential risks of unprotected exposure of personal information and serious invasions and breaches of individual privacy. To address the complex issues surrounding LBA, particularly from the user's behavioral perspective, this study applied the privacy calculus model (PCM) to explore the factors that influence the adoption of LBA. According to PCM, consumers are engaged in a dynamic adjustment process in which privacy risks are weighed against the benefits of information disclosure. Consistent with the principal notion of PCM, we investigated how individual users make a risk-benefit assessment in which personalized service and locatability act as benefit-side factors and information privacy risk acts as a risk-side factor accompanying LBA adoption. In addition, we consider the moderating role of trust in the service provider on the inhibiting effect of privacy risks on user intention to adopt LBA. Further, we include perceived ease of use and usefulness as additional constructs to examine whether the technology acceptance model (TAM) can be applied in the context of LBA adoption. The research model with ten (10) hypotheses was tested using data gathered from 98 respondents through a quasi-experimental survey method. During the survey, each participant was asked to navigate a website where an experimental simulation of an LBA allowed the participant to purchase time- and location-sensitive discounted tickets for nearby stores. Structural equation modeling using partial least squares validated the instrument and the proposed model. The results showed that six (6) out of ten (10) hypotheses were supported. Regarding the core PCM, H2 (locatability → intention to use LBA) and H3 (privacy risks → intention to use LBA) were supported, while H1 (personalization → intention to use LBA) was not. Further, we could not find any interaction effects (personalization X privacy risks, H4, and locatability X privacy risks, H5) on the intention to use LBA. In terms of privacy risks and trust, as mentioned above, we found a significant negative influence of privacy risks on intention to use (H3) but a positive influence of trust, which supported H6 (trust → intention to use LBA).
The moderating effect of trust on the negative relationship between privacy risks and intention to use LBA was tested and confirmed, supporting H7 (privacy risks X trust → intention to use LBA). The two hypotheses regarding the TAM, H8 (perceived ease of use → perceived usefulness) and H9 (perceived ease of use → intention to use LBA), were supported; however, H10 (perceived usefulness → intention to use LBA) was not. The results of this study offer the following key findings and implications. First, the application of PCM was found to provide a good analysis framework in the context of LBA adoption. Many of the hypotheses in the model were confirmed, and the high R² value (51%) indicated a good fit of the model. In particular, locatability and privacy risks were found to be appropriate PCM-based antecedent variables. Second, the existence of a moderating effect of trust in the service provider suggests that the same marginal change in the level of privacy risks may differentially influence the intention to use LBA. That is, while privacy risks increasingly become an important social issue and will negatively influence the intention to use LBA, it is critical for LBA providers to build consumer trust and confidence to successfully mitigate this negative impact. Lastly, we could not find sufficient evidence that the intention to use LBA is influenced by perceived usefulness, which has been very well supported in most previous TAM research. This may suggest that future research should examine the validity of applying TAM and further extend or modify it in the context of LBA or other similar smartphone apps.
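
The paper tests the trust × privacy-risk moderation with PLS-based structural equation modeling; the sketch below instead uses a plain OLS moderated regression on synthetic data (all variable names, coefficients, and sample values are hypothetical) simply to illustrate how an interaction hypothesis like H7 can be estimated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical 7-point-scale responses for 98 participants; purely synthetic data.
rng = np.random.default_rng(1)
n = 98
df = pd.DataFrame({
    "locatability": rng.normal(5.0, 1.0, n),
    "privacy_risk": rng.normal(4.0, 1.0, n),
    "trust": rng.normal(4.5, 1.0, n),
})
df["intention"] = (0.4 * df["locatability"] - 0.5 * df["privacy_risk"]
                   + 0.3 * df["trust"] + 0.25 * df["privacy_risk"] * df["trust"]
                   + rng.normal(0.0, 1.0, n))

# Moderated regression: the privacy_risk:trust coefficient tests whether trust
# weakens the negative effect of privacy risk on usage intention (akin to H7).
model = smf.ols("intention ~ locatability + privacy_risk * trust", data=df).fit()
print(model.summary().tables[1])
```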


Toward a Social Sciences Methodology for Electronic Survey Research on the Internet or Personal Computer Networks (사회과학 연구에 있어 인터넷 및 상업용 통신망을 이용한 전자설문 조사방법의 활용)

  • Hong Yong-Gee;Lee Hong-Gee;Chae Su-Kyung
    • Management & Information Systems Review / v.3 / pp.287-316 / 1999
  • Cyberspace permits us to move beyond traditional face-to-face, mail, and telephone surveys, yet still to examine basic issues regarding the quality of data collection: sampling, questionnaire design, survey distribution, means of response, and database creation. This article addresses each of these issues by contrasting and comparing traditional (paper-and-pencil) survey methods with Internet- or Personal Computer Network-mediated (screen-and-keyboard) survey methods; it also introduces researchers to this revolutionary and innovative tool and outlines a variety of practical methods for using the Internet or Personal Computer Networks. The revolution in telecommunications technology has fostered the rapid growth of the Internet all over the world. The Internet is a massive global network comprising many national and international networks of interconnected computers. The Internet or Personal Computer Networks could be the comprehensive interactive tool that facilitates the development of these skills. They provide a virtual frontier to expand our access to information and to increase our knowledge and understanding of public opinion, political behavior, social trends, and lifestyles through survey research. Comparable to other technological advancements, the Internet or Personal Computer Networks present opportunities that will significantly impact the process and quality of survey research now and in the twenty-first century. There are trade-offs between traditional and Internet or Personal Computer Network surveys. The Internet or Personal Computer Networks are an important channel for obtaining information from target participants. The cost savings in time, effort, and material are substantial, and using this survey tool can improve the quality of the research environment. There are, however, several limitations to the Internet or Personal Computer Network survey approach. It requires the researcher to be familiar with Internet navigation and e-mail, which is essential for this process. The use of Listservs and Newsgroups results in a biased sample of the population of corporate trainers; however, it is this group that participates in technology and is at the forefront of shaping the new organizations of interest, and therefore it consists of appropriate participants. If this survey method becomes popular and is used too frequently, potential respondents may become as annoyed with e-mail as they sometimes are with mail surveys and junk mail; being a member of the Listserv or Newsgroup may moderate that reaction. There is a need to determine efficient, effective ways for the researcher to strip identifiers from e-mail, so that respondents remain anonymous, while simultaneously blocking a respondent from responding to a particular survey instrument more than once. The optimum process would be one that is initiated by the researcher: simple, fast, and inexpensive to administer, and credible with respondents. This would protect the legitimacy of the sample and anonymity. Attractive Internet or Personal Computer Network survey formats should build on the strengths of standardized structures while capitalizing on the dynamic and interactive capability of the medium; without such innovations in survey design, it is difficult to imagine why potential survey respondents would use their time to answer questions.
More must be done to create diverse and exciting ways of building credibility between respondents and researchers on the Internet or Personal Computer Networks. We believe that the future of much exciting research lies in electronic survey research. The ability to communicate across distance, time, and national boundaries offers great possibilities for studying the ways in which technology and technological discourse are shaped, used, and disseminated; the many recent doctoral dissertations that treat some aspect of electronic survey research testify to the increasing focus on the Internet or Personal Computer Networks. Thus, scholars should begin a serious conversation about the methodological issues of conducting research in cyberspace. Of all the disciplines, those with an emphasis on the relationship between technology and human communication should take the lead in considering research in cyberspace.
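
One common way to meet the anonymity-plus-deduplication requirement mentioned above (a generic sketch, not a method proposed in the article) is to store only a salted hash of each respondent's e-mail address and reject repeated tokens:

```python
import hashlib

# Hypothetical approach: hash each respondent's e-mail address with a per-survey
# salt so the stored token cannot be traced back to the respondent, while still
# allowing the survey system to reject duplicate submissions.
SALT = b"per-survey-random-salt"   # assumed to be generated once per survey
seen_tokens = set()

def accept_response(email: str) -> bool:
    """Return True if this respondent has not already submitted the survey."""
    token = hashlib.sha256(SALT + email.strip().lower().encode()).hexdigest()
    if token in seen_tokens:
        return False               # duplicate submission, reject
    seen_tokens.add(token)
    return True

print(accept_response("user@example.com"))    # True  (first response)
print(accept_response("User@Example.com "))   # False (same respondent, normalized)
```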
