• Title/Summary/Keyword: integration step

513 search results

Vehicle-Bridge Interaction Analysis of Railway Bridges by Using Conventional Trains (기존선 철도차량을 이용한 철도교의 상호작용해석)

  • Cho, Eun Sang;Kim, Hee Ju;Hwang, Won Sup
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.1A / pp.31-43 / 2009
  • In this study, a numerical method is presented that can accommodate various train types and solve the equations of motion for vehicle-bridge interaction analysis without iteration, by formulating coupled equations of motion. The coupled equations of motion for vehicle-bridge interaction are solved by the Newmark-β direct integration method; by composing the effective stiffness matrix and the effective force vector at each analysis step, they can be solved in the same manner as the equilibrium equations of a static analysis. The effective stiffness matrix is also restructured by the skyline method to increase analysis efficiency, and Cholesky matrix decomposition is applied to minimize the numerical errors that can arise from directly computing the inverse matrix. The equations of motion for conventional trains are derived, and the numerical train models are idealized as sets of linear springs and dashpots with 16 degrees of freedom. The bridge models are simplified using three-dimensional space frame elements based on Euler-Bernoulli theory. Vertical and lateral rail irregularities are generated from the PSD functions of the Federal Railroad Administration (FRA). The results of the vehicle-bridge interaction analysis are verified against experimental results for railway plate girder bridges with span lengths of 12 m and 18 m; both the experimental and analytical data are low-pass filtered, with the cutoff frequency set to twice the first bending frequency of the bridge.
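The solution scheme the abstract describes, forming an effective stiffness matrix and effective force vector each step and solving them like a static equilibrium problem via Cholesky factorization, can be sketched as follows. This is an illustrative sketch using the standard Newmark-β formulas, not the paper's implementation; the matrices `M`, `C`, `K` and the state vectors are assumed to be dense NumPy arrays.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def newmark_step(M, C, K, F_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """One Newmark-beta time step: compose the effective stiffness matrix
    and effective force vector, then solve as a static equilibrium problem
    via Cholesky factorization, avoiding an explicit matrix inverse."""
    a0 = 1.0 / (beta * dt**2)
    a1 = gamma / (beta * dt)
    a2 = 1.0 / (beta * dt)
    a3 = 1.0 / (2.0 * beta) - 1.0
    a4 = gamma / beta - 1.0
    a5 = dt * (gamma / (2.0 * beta) - 1.0)

    K_eff = K + a0 * M + a1 * C                       # effective stiffness
    F_eff = (F_next
             + M @ (a0 * u + a2 * v + a3 * a)         # inertia contribution
             + C @ (a1 * u + a4 * v + a5 * a))        # damping contribution

    u_new = cho_solve(cho_factor(K_eff), F_eff)       # "static" solve
    a_new = a0 * (u_new - u) - a2 * v - a3 * a
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return u_new, v_new, a_new
```

With the default β = 1/4, γ = 1/2 (average acceleration) the scheme is unconditionally stable, which is why it is a common choice for vehicle-bridge interaction problems.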

Design Information Management System Core Development Using Industry Foundation Classes (IFC를 이용한 설계정보관리시스템 핵심부 구축)

  • Lee Keun-hyung;Chin Sang-yoon;Kim Jae-jun
    • Korean Journal of Construction Engineering and Management / v.1 no.2 s.2 / pp.98-107 / 2000
  • Increased use of computers in AEC (Architecture, Engineering and Construction) has expanded the amount of information generated by CAD (Computer Aided Design), PMIS (Project Management Information System), structural analysis programs, and scheduling programs, while also making it more complex. The productivity of the AEC industry depends largely on sound management and efficient reuse of this information. This trend has prompted much research and development on ITC (Information Technology in Construction) and CIC (Computer Integrated Construction). In particular, many researchers have studied IFC (Industry Foundation Classes), developed by the IAI (International Alliance for Interoperability) for product-based information sharing. Despite some valuable outputs, however, these studies are still at a preliminary stage and deal mainly with conceptual ideas and trial implementations. Research that lays out the process of developing an IFC application, the core of a design information management system, and its application plan remains to be done. The purpose of this paper is therefore to identify the technologies needed for a design information management system using IFC, and to present the key roles, the development process of the IFC application, and its application plan. The system integrates architectural and structural information into the product model and groups product items at various levels and from various aspects. To build the process model, we defined two activities at the initial level, 'Product Modeling' and 'Application Development', and decomposed the latter into five activities: 'IFC Schema Compile', 'Class Compile', 'Make Project Database Schema', 'Development of Product Frameworker', and 'Make Project Database'. These activities are carried out with a C++ compiler, CAD, ObjectStore, ST-Developer, and ST-ObjectStore. Finally, we propose an application process with six stages: '3D Modeling', 'Creation of Product Information', 'Creation and Update of Database', 'Reformation of Model's Structure with Multiple Hierarchies', 'Integration of Drawings and Specifications', and 'Creation of Quantity Information'. The IFCs, including new classes to be developed for the construction, civil/structural, and facility management domains, will be used by experts through Internet distribution technologies such as CORBA and DCOM.


FTA Negotiation Strategy and Politics in the Viewpoint of the Three-Dimensional Game Theory: Korea-EU FTA and EU-Japan EPA in Comparison (삼차원게임이론의 관점에서 바라 본 유럽연합의 FTA 협상 전략 및 정치: 한-EU FTA와 EU-일본 EPA의 비교를 중심으로)

  • Kim, Hyun-Jung
    • Journal of International Area Studies (JIAS) / v.22 no.2 / pp.81-110 / 2018
  • In this paper, we examine regional economic integration and the trade negotiation strategy and bargaining power of the European Union through the logical structure of three-dimensional game theory. Three-dimensional game theory emphasizes that the negotiator pursues a triple-sided negotiation strategy, standing on the boundary of each side's game while operating the games simultaneously, constrained from each direction or occasionally turning those constraints into opportunities. The aim of the three-dimensional approach is to organize the process by which a regional union, as the subject of negotiation, coordinates opinions and mediates interests at the international, regional, and member-state levels. This study compares the recently concluded EU-Japan EPA (Economic Partnership Agreement) negotiation process with the Korea-EU FTA, summarizes the logic of three-dimensional game theory as it applies to the FTAs of a regional economic partnership, and illustrates the strategies regional economic communities use in responding to negotiations. Trade policy at the EU level, an area that is technical in character and difficult to politicize, has already been placed under the exclusive competence of the Union. Moreover, the policy process at the Union level rarely surfaces as a political issue, and public opinion enters the process only through a two-step channel. In conclusion, the EU's trade policy process is a complicated and sophisticated one, with authority allocated among various central organs. Paradoxically, the negotiation mechanism is simplified by the common policy decision process and the structural characteristics of the trade bloc, and bargaining power at the community level is thereby enhanced. As a result, the European Commission functions as a very strong negotiator in bilateral trade negotiations at the international level.

Characteristics of Carcass and Meat Yields of Fattening Pigs by Production Step (비육돈 생산단계에 따른 도체 및 부분육 생산 특성)

  • Kim, J.H.;Park, B.Y.;Yoo, Y.M.;Cho, S.H.;Kim, Y.K.;Lee, J.M.;Yun, H.J.;Kim, K.N.
    • Journal of Animal Science and Technology / v.44 no.6 / pp.793-800 / 2002
  • The carcass and meat-yield characteristics of fattening pigs at different production steps were investigated for Landrace (LL, n=41), Yorkshire (YY, n=33), Duroc (DD, n=30), $F_1$ (LY, n=25), and the LYD crossbred (n=48). Duroc had greater carcass weight loss than the other breeds (p<0.05). Yorkshire and $F_1$ had higher retail-cut production weight than the other purebreds or crossbreds. Carcasses from Landrace and $F_1$ were significantly longer than those of the other breeds (p<0.05), and $F_1$ produced wider carcasses. Carcass thickness at the aitch bone was greater for $F_1$ and the crossbred than for the other breeds (p<0.05). Landrace, Yorkshire, and $F_1$ produced more loin and tenderloin by weight than the other breeds (p<0.05), and Yorkshire and $F_1$ produced more picnic shoulder. Hind legs from Yorkshire and $F_1$, and fore legs from the crossbred, were heavier. Duroc produced the lowest belly weight among the breeds. The acceptance level of the loin was extremely low for all breeds. Landrace had the highest acceptance level for tenderloin, while Yorkshire had the highest acceptance levels for picnic shoulder and ham when evaluated by the Japanese export standard. In conclusion, the introduction of pure breeds and the establishment of mating steps are necessary to produce pork with high acceptance in carcass and meat yields.

A Study on the Neumann-Kelvin Problem of the Wave Resistance (조파저항에서의 Neumann-Kelvin 문제에 대한 연구)

  • 김인철
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.21 no.2 / pp.131-136 / 1985
  • The calculation of the resulting fluid motion is an important problem of ship hydrodynamics. For a partially immersed body, the condition of constant pressure at the free surface can be linearized. The resulting linear boundary-value problem for the velocity potential is the Neumann-Kelvin problem. The two-dimensional Neumann-Kelvin problem was studied for the half-immersed circular cylinder by Ursell. Maruo introduced a slender-body approach that simplifies the Neumann-Kelvin problem so that the integral equation determining the singularity distribution over the hull surface can be solved by a marching procedure of step-by-step integration starting at the bow. In the present paper, it is suggested that for the two-dimensional Neumann-Kelvin problem any solution must have singularities in the corners between the body surface and the free surface, and that there can be infinitely many solutions depending on the singularities in those corners.
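For reference, the linearized boundary-value problem the abstract refers to is conventionally stated as follows. This is the standard textbook form of the Neumann-Kelvin problem, not notation from the paper itself; the symbols φ (velocity potential), U (forward speed), g (gravity), and n (body normal) are assumptions.

```latex
\begin{aligned}
&\nabla^{2}\phi = 0 &&\text{in the fluid domain},\\
&U^{2}\,\phi_{xx} + g\,\phi_{z} = 0 &&\text{on the undisturbed free surface } z = 0,\\
&\frac{\partial \phi}{\partial n} = U\,n_{x} &&\text{on the wetted body surface},
\end{aligned}
```

together with a radiation condition requiring that waves appear only downstream of the body. The free-surface condition is often written as $\phi_{xx} + \nu\,\phi_{z} = 0$ with $\nu = g/U^{2}$, the Kelvin wavenumber.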


An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk;Kim, Ji-Hun;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.125-141 / 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it has become more important to handle such attacks appropriately, and there is substantial interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. Although they perform very well under normal conditions, they cannot handle new or unknown attack patterns. As a result, recent studies on intrusion detection systems use artificial intelligence techniques, which can respond proactively to unknown threats. Researchers have long adopted and tested various artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect network intrusions. However, most studies have applied these techniques singly, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model combines the prediction results of four binary classification models, logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM), which may be complementary to each other. Genetic algorithms (GA) are used as the tool for finding optimal combining weights. The proposed model is built in two steps. In the first step, the integration model with the lowest prediction error (i.e., misclassification rate) is generated. In the second step, the model searches for the optimal classification threshold for flagging intrusions, the one that minimizes the total misclassification cost. Calculating the total misclassification cost of an intrusion detection system requires understanding its asymmetric error-cost scheme. There are two common types of error in intrusion detection. The first is the False-Positive Error (FPE), a false alarm whose wrong judgment may trigger unnecessary remedial action. The second is the False-Negative Error (FNE), which misjudges malicious activity as normal. Compared to FPE, FNE is more fatal, so the total misclassification cost is affected more by FNE than by FPE. To validate the practical applicability of our model, we applied it to a real-world network intrusion dataset collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples by random sampling. We also compared the results of our model with those of the single techniques to confirm its superiority. LOGIT and DT were tested using PASW Statistics v18.0, ANN using Neuroshell R4.0, and SVM using LIBSVM v2.90, a freeware tool for training SVM classifiers. Empirical results showed that the proposed GA-based model outperformed all comparative models in detecting network intrusions, from both the accuracy perspective and the total-misclassification-cost perspective. We therefore expect this study to contribute to building cost-effective intelligent intrusion detection systems.
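The two-step design above, GA-optimized combining weights followed by a cost-minimizing threshold search, can be sketched roughly as below. This is a toy illustration, not the paper's code: the GA operators, population sizes, and the cost ratio (false negatives ten times costlier than false positives) are all assumptions, and `probs` stands in for the four classifiers' predicted intrusion probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_cost(w, probs, y, thr, c_fp=1.0, c_fn=10.0):
    """Total misclassification cost of the weighted ensemble.
    probs: (n, 4) intrusion probabilities from LOGIT, DT, ANN, SVM;
    c_fn > c_fp reflects the asymmetric error-cost scheme."""
    score = probs @ (w / w.sum())
    pred = (score >= thr).astype(int)
    fp = np.sum((pred == 1) & (y == 0))
    fn = np.sum((pred == 0) & (y == 1))
    return c_fp * fp + c_fn * fn

def ga_weights(probs, y, pop=30, gens=40, thr=0.5):
    """Step 1 (toy GA): evolve non-negative combining weights that
    minimize plain classification error at a fixed threshold."""
    P = rng.random((pop, 4)) + 1e-6
    for _ in range(gens):
        fit = np.array([ensemble_cost(w, probs, y, thr, 1.0, 1.0) for w in P])
        elite = P[np.argsort(fit)[: pop // 2]]                 # selection
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        kids = kids * rng.normal(1.0, 0.1, kids.shape)         # mutation
        P = np.vstack([elite, np.abs(kids) + 1e-6])
    return P[np.argmin([ensemble_cost(w, probs, y, thr, 1.0, 1.0) for w in P])]

def best_threshold(w, probs, y):
    """Step 2: scan thresholds for the minimum total misclassification
    cost under the asymmetric cost scheme."""
    grid = np.linspace(0.05, 0.95, 19)
    return grid[np.argmin([ensemble_cost(w, probs, y, t) for t in grid])]
```

The asymmetric cost in step 2 typically pushes the chosen threshold below 0.5, trading extra false alarms for fewer missed intrusions.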

A Comparative Study on the Characteristics of Cultural Heritage in China and Vietnam (중국과 베트남의 문화유산 특성 비교 연구)

  • Shin, Hyun-Sil;Jun, Da-Seul
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.40 no.2 / pp.34-43 / 2022
  • This study compared the characteristics of cultural heritage in China and Vietnam, which have developed under mutual geopolitical and cultural influence throughout history, and reached the following conclusions. First, the definition of cultural heritage carries similar meanings in both countries. As for classification, both countries adopted the legal concept of intangible cultural heritage through UNESCO, so they are similar in that respect as well. Second, while China has separate laws for managing tangible and intangible cultural heritage, Vietnam manages both types under a single integrated law. Vietnam introduced the concept of cultural heritage later than China, but its system shows a higher degree of integration. Third, cultural heritage in both countries is graded, with the grading applied differently depending on the heritage type. The designation methods are similar in that both countries use a vertical, step-by-step procedure. By restoring the value of heritage and reinforcing its integrity through such step-by-step review, both countries pursue balanced national development through tourism that lets people enjoy heritage and creates economic effects. Fourth, both countries have a central government agency for cultural heritage management, but local governments in China hold greater authority than those in Vietnam. In addition, unlike Vietnam, where tangible and intangible heritage are managed by one integrated institution, China has a separate institution in charge of intangible cultural heritage. Fifth, China is establishing a conservation management policy focused on sustainability that balances the protection and utilization of heritage. Vietnam is working to integrate the contents and spirit of the UNESCO convention into its laws, programs, and projects related to cultural heritage, especially intangible heritage, and into its economy and society as a whole; however, it still depends on the influence of international organizations. Sixth, China and Vietnam are now paying attention to the recently introduced category of intangible heritage, breaking away from protection policies centered on tangible heritage. They also aim to unite their peoples through cultural heritage and to achieve unified national policy goals. The two countries need to use intangible heritage as an efficient means of preserving local communities and regions, and should establish a cultural heritage preservation network for each subject that can integrate the components of intangible heritage into one unit, laying the foundation for public enjoyment. This study is limited to comparing the cultural heritage systems and preservation management status of China and Vietnam; a comparison of cultural heritage policies by type remains a task for future research.

An Examination of Knowledge Sourcing Strategies Effects on Corporate Performance in Small Enterprises (소규모 기업에 있어서 지식소싱 전략이 기업성과에 미치는 영향 고찰)

  • Choi, Byoung-Gu
    • Asia Pacific Journal of Information Systems / v.18 no.4 / pp.57-81 / 2008
  • Knowledge is an essential strategic weapon for sustaining competitive advantage and a key determinant of organizational growth. When knowledge is shared and disseminated throughout the organization, it increases the organization's value by providing the ability to respond to new and unusual situations. The growing importance of knowledge as a critical resource has forced executives to pay attention to their organizational knowledge, and organizations are increasingly undertaking knowledge management initiatives and making significant investments. Knowledge sourcing is considered the first important step in effective knowledge management, and most firms strive to realize the benefits of knowledge management by using various knowledge sources effectively. Appropriate knowledge sourcing strategies enable organizations to create, acquire, and access knowledge in a timely manner by reducing search and transfer costs, which results in better firm performance. In response, the knowledge management literature has devoted substantial attention to the analysis of knowledge sourcing strategies. Many studies categorize them as internal-oriented or external-oriented: an internal-oriented sourcing strategy attempts to increase firm performance by integrating knowledge within the boundary of the firm, whereas an external-oriented strategy attempts to bring in knowledge from outside sources, via acquisition or imitation, and then to transfer that knowledge across the organization. The extant literature on knowledge sourcing strategies, however, focuses primarily on large organizations. Although many studies have highlighted major differences between large and small firms and the need for different strategies at different firm sizes, scant attention has been paid to how knowledge sourcing strategies affect firm performance in small firms and how small and large firms differ in their adoption patterns. This study attempts to advance the literature by examining the impact of knowledge sourcing strategies on small firm performance from a holistic perspective. Drawing on knowledge-based theory from organization science and complementarity theory from the economics literature, this paper is motivated by two questions: (1) what are the adoption patterns of different knowledge sourcing strategies in small firms (i.e., which sourcing strategies should be adopted, and which work well together)?; and (2) what are the performance implications of these adoption patterns? To answer them, this study developed three hypotheses. The first, based on knowledge-based theory, is that internal-oriented knowledge sourcing is positively associated with small firm performance. The second, also based on knowledge-based theory, is that external-oriented knowledge sourcing is positively associated with small firm performance. The third, based on complementarity theory, is that pursuing both internal- and external-oriented knowledge sourcing simultaneously is negatively, or less positively, associated with small firm performance. As a sampling frame, 700 firms were identified from the Annual Corporation Report in Korea. Survey questionnaires were mailed to owners or executives who were most knowledgeable about the firm's knowledge sourcing strategies and performance. A total of 188 companies replied, yielding a response rate of 26.8%; 12 incomplete responses were eliminated, leaving 176 for the final analysis. Since all independent variables were measured as continuous variables, a supermodularity function was used to test the hypotheses, based on the cross partial derivative of the payoff function. The results indicated no significant impact of the internal-oriented sourcing strategy but a positive impact of the external-oriented sourcing strategy on small firm performance. This intriguing result may be explained by the resource and capital constraints of small firms: they typically have restricted financial and human resources and do not have enough assets to always develop knowledge internally. Another possible explanation is competency traps or core rigidities: building a knowledge base on internal knowledge creates core competences, but excessive internally focused knowledge exploration leads to behaviors blind to other knowledge. Interestingly, this study found that internal- and external-oriented knowledge sourcing strategies had a substitutive relationship, inconsistent with previous studies that suggested a complementary one. This result might be explained by organizational identification theory: internal members may perceive external knowledge as a threat and tend to ignore it, preferring to maintain their own knowledge, legitimacy, and homogeneous attitudes. Integrating knowledge from internal and external sources might therefore be ineffective, failing to improve firm performance. Small firms' resource and capital constraints and their lack of management expertise and absorptive capacity offer another explanation: although integrating different knowledge sources is critical, high levels of knowledge sourcing across many areas are expensive and often unrealistic for small enterprises. This study provides several implications for research as well as practice. First, it extends existing knowledge by examining the substitutability (and complementarity) of knowledge sourcing strategies; most prior studies have investigated the independent effects of these strategies on performance without considering their combined impacts, whereas this study tests complementarity using the productivity approach, which has been considered a definitive test for complementarity. Second, it sheds new light on knowledge management research by identifying the relationship between knowledge sourcing strategies and small firm performance: contrary to the conventional wisdom of a complementary relationship based on data from large firms, this study identifies a substitutive relationship using data from small firms. Third, for practice, managers of small firms should focus on external-oriented knowledge sourcing strategies; moreover, adopting both sourcing strategies simultaneously impedes small firm performance.
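The complementarity test mentioned above rests on the sign of the cross partial derivative of the payoff function; with binary adopt/not-adopt strategies this reduces to a discrete difference, which a hypothetical helper can make concrete. The function and the numbers in the test are illustrative, not the study's estimates.

```python
def complementarity_gap(f00, f10, f01, f11):
    """Discrete analogue of the cross partial derivative of the payoff
    function f(internal, external): f10 is performance with internal
    sourcing only, f01 with external only, f11 with both.
    Positive gap -> complementary strategies (supermodular payoff);
    negative gap -> substitutive, the pattern this study found."""
    return (f11 - f10) - (f01 - f00)
```

If adopting the second strategy adds less on top of the first than it adds alone, the gap is negative and the strategies substitute for each other.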

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.975-976 / 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, still in its preliminary stage, uses a more conventional microprocessor architecture; here we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism, based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication, directly on silicon. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in full-custom CMOS technology. The second, more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both chips have multiple datapaths for rule evaluation and execute multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS), storing and executing 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it has only a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture makes the following important improvements over the AT&T chip: programmable rule-set memory (RAM); on-chip fuzzification by table lookup; on-chip defuzzification by the centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format "IF A and B and C and D THEN Do E and Do F"; with this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format "IF A and B THEN Do E" using the same datapath; with this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). One board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip in a VMEbus environment; high-level C-language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach of the kind developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a rule-based fuzzy inference program from the introduction of specialized instructions, namely min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5x increase in inference speed if the R3000 had min and max instructions, which would also speed up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so modifying an embedded processor into a processor for fuzzy control is very effective. Table I shows the measured inference speed of a regular MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time for a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even with a specialized fuzzy microprocessor. In design time and cost, the two approaches represent two extremes, and the ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES
                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences    125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 us
  FLIPS              48                     122                         156,250
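The inference mechanism described above, max-min composition with Mamdani implication and centroid defuzzification over 64-element fuzzy sets, can be sketched in software roughly as follows. This is an illustrative simulator-style sketch, not the chip's actual datapath; the rule and input encodings are assumptions.

```python
import numpy as np

def mamdani_infer(rules, inputs, universe):
    """Max-min compositional inference with centroid defuzzification.
    rules: list of (antecedents, consequent), where each antecedent and
    the consequent are membership arrays over a discretized universe
    (64 elements, matching the representation mentioned above);
    inputs: crisp input sample indices, one per antecedent position."""
    agg = np.zeros_like(universe, dtype=float)
    for antecedents, consequent in rules:
        # firing strength: min of antecedent membership degrees (fuzzy AND)
        strength = min(a[i] for a, i in zip(antecedents, inputs))
        # Mamdani implication clips the consequent at the firing strength;
        # rules are aggregated with max (fuzzy OR)
        agg = np.maximum(agg, np.minimum(strength, np.asarray(consequent, float)))
    # centroid defuzzification (small epsilon guards an all-zero aggregate)
    return float(np.sum(agg * universe) / (np.sum(agg) + 1e-12))
```

Note how the inner loop is dominated by min and max operations, which is exactly why dedicated min/max instructions give the speed-up reported for the modified R3000.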


The Mediating Role of Perceived Risk in the Relationships Between Enduring Product Involvement and Trust Expectation (지속적 제품관여도와 소비자 요구신뢰수준 간의 영향관계: 인지된 위험의 매개 역할에 대한 실증분석을 중심으로)

  • Hong, Ilyoo B.;Kim, Taeha;Cha, Hoon S.
    • Asia Pacific Journal of Information Systems / v.23 no.4 / pp.103-128 / 2013
  • When a consumer needs a product or service and multiple sellers are available online, selecting a seller to buy from is a complex process involving many behavioral dimensions. As part of this selection process, consumers may set a minimum trust expectation that can be used to screen out less trustworthy sellers. In previous research, the level of consumers' trust expectation has been anchored on two important factors: product involvement and perceived risk. Product involvement refers to the extent to which a consumer perceives a specific product as important; higher product involvement may thus result in higher trust expectation of sellers. Related studies have also found that when consumers perceive a higher level of risk (e.g., credit card fraud risk), they set higher trust expectations as well. While abundant research addresses the relationship between product involvement and perceived risk, little attention has been paid to an integrative view of the link between the two constructs and their impact on trust expectation. The present paper is a step toward filling this research gap. Its purpose is to understand the process by which a consumer chooses an online merchant by examining the relationships among product involvement, perceived risk, trust expectation, and intention to buy from an e-tailer. We focus specifically on the mediating role of perceived risk in the relationship between enduring product involvement and trust expectation: does product involvement affect trust expectation directly, or indirectly through perceived risk? The research model with four hypotheses was tested using data gathered from 635 respondents through an online survey. The structural equation modeling technique with partial least squares was used to validate the instrument and the proposed model. Three of the four hypotheses were supported. First, the intention to buy from a digital storefront is positively and significantly influenced by trust expectation, supporting H4 (trust expectation ${\rightarrow}$ purchase intention). Second, perceived risk was a strong predictor of trust expectation, supporting H2 (perceived risk ${\rightarrow}$ trust expectation). Third, we found no evidence of a direct influence of product involvement, so H3 was rejected (product involvement ${\rightarrow}$ trust expectation). Finally, we found a significant positive relationship between product involvement and perceived risk (H1: product involvement ${\rightarrow}$ perceived risk), suggesting the possibility that perceived risk completely mediates the relationship between enduring product involvement and trust expectation. We therefore conducted an additional mediation test by comparing the original model with a revised model that omits the mediator variable, perceived risk. Indeed, with perceived risk removed, product involvement strongly influenced trust expectation, an influence that was suppressed (i.e., mediated) by perceived risk in the original model. A Sobel test statistically confirmed the complete mediation effect. The results offer the following key findings. First, enduring product involvement is positively related to perceived risk: the more a consumer is enduringly involved with a given product, the greater the risk he or she is likely to perceive in purchasing it online. Second, perceived risk is positively related to trust expectation: a consumer with strong risk perceptions about an online purchase is likely to buy from a highly trustworthy online merchant, thereby mitigating potential risks. Finally, product involvement has no direct influence on trust expectation; the relationship between the two constructs is indirect, mediated by perceived risk. This is an important theoretical integration of two separate streams of literature on product involvement and perceived risk. The research also provides useful implications for practitioners as well as academics. For practicing managers of online retail stores, investing in reducing consumers' perceived risk lowers the trust expectation and thus increases the consumer's intention to purchase products or services. For academics, perceived risk mediates the relationship between enduring product involvement and trust expectation; further research is needed to elaborate the theoretical relationships among these constructs.
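The Sobel test used above to confirm complete mediation computes a z-statistic from the two path coefficients of the indirect effect (involvement → perceived risk, and perceived risk → trust expectation) and their standard errors. A minimal sketch follows; the coefficient values in the test are illustrative, not the paper's estimates.

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic for an indirect effect a*b, where a and b
    are the two mediation path coefficients and se_a, se_b their
    standard errors; |z| > 1.96 indicates significance at the 5% level."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
```

A significant z with no significant direct path, as in this study, is the classic signature of complete mediation.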