• Title/Summary/Keyword: Building dimension


The Effect of Brand Familiarity on Green Claim Skepticism in Distribution Channel

  • Belay Addisu KASSIE;Hyongjae RHEE
    • Journal of Distribution Science
    • /
    • v.21 no.6
    • /
    • pp.51-68
    • /
    • 2023
  • Purpose: This study aims to explore the impact of skepticism toward green product claims on green purchase intention and further investigates the moderating role of environmental concern in that relationship. Drawing on the persuasion knowledge model, the study expected that ambiguity avoidance penalizes less familiar brands more than familiar ones. Building on Hofstede's cultural dimensions, specifically uncertainty avoidance, it also undertook a scenario study to examine differences between uncertainty-avoidance cultural groups, and it investigates gender differences in green claim skepticism and the proclivity to purchase green products. Research design, data, and methodology: Relevant hypotheses were designed, and the R programming language was used for analysis; two independent-sample t-tests and regression analysis were carried out to test the hypotheses. Results: The results suggest that consumers' skepticism toward green claims influenced the intention to purchase eco-friendly products, and the findings confirm that this effect is moderated by environmental concern. The two scenarios also reveal that consumers in a high uncertainty-avoidance culture exhibited greater skepticism toward green print advertising and green packaging claims when the brand in the advertisement or packaging was unfamiliar than when it was familiar. Conclusions: To counter the negative effect of skepticism, consumers should believe that environmental claims are valid, so that they can contribute to solving sustainability issues.
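The abstract's methodology (independent-sample t-tests plus a moderated regression) can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the paper's R code or dataset; variable names and effect sizes are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200

# Synthetic illustration: skepticism lowers purchase intention, and
# environmental concern moderates that effect (coefficients are made up).
skepticism = rng.normal(0, 1, n)
concern = rng.normal(0, 1, n)
intention = (-0.5 * skepticism + 0.3 * concern
             + 0.25 * skepticism * concern + rng.normal(0, 0.5, n))

# Moderated regression: intention ~ skepticism + concern + interaction term
X = np.column_stack([np.ones(n), skepticism, concern, skepticism * concern])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print("interaction coefficient:", round(beta[3], 2))

# Two independent-sample t-test, e.g., comparing two consumer groups
group_a = intention[concern > 0]
group_b = intention[concern <= 0]
t, p = stats.ttest_ind(group_a, group_b, equal_var=False)
print("t-statistic:", round(t, 2))
```

A nonzero interaction coefficient is what "moderation by environmental concern" means operationally: the slope of intention on skepticism changes with concern.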

Comparing type-1, interval and general type-2 fuzzy approach for dealing with uncertainties in active control

  • Farzaneh Shahabian Moghaddam;Hashem Shariatmadar
    • Smart Structures and Systems
    • /
    • v.31 no.2
    • /
    • pp.199-212
    • /
    • 2023
  • Fuzzy logic is now a well-recognized alternative in control applications, thanks to its inherent advantages. General type-2 fuzzy sets allow a third dimension to capture higher-order uncertainty and therefore offer a very powerful model for uncertainty handling in real-world applications. With recent advances that have increased the performance of general type-2 fuzzy logic controllers, type-2 controllers are now expected to spread to many challenging applications, in particular structural control problems, which form the case study in this paper. It should be highlighted that this is the first application of the general type-2 fuzzy approach to civil structures. In the following, a general type-2 fuzzy logic controller (GT2FLC) is used for active control of a 9-story nonlinear benchmark building. Type-1 and interval type-2 fuzzy logic controllers are also designed for comparison with the GT2FLC. The performance of the controllers is validated through computer simulation in MATLAB. It is demonstrated that the extra design degrees of freedom of the GT2FLC allow a greater potential to model and handle the uncertainties inherent in earthquakes and control systems. The GT2FLC successfully outperforms control systems using T1 and IT2 FLCs.
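The type-1 versus type-2 distinction the abstract relies on can be illustrated with membership functions. This minimal sketch (not the paper's controller; the triangular parameters and "blur" width are arbitrary assumptions) contrasts a crisp type-1 membership value with the lower/upper bounds of an interval type-2 footprint of uncertainty:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def interval_type2(x, a, b, c, blur=0.2):
    """Interval type-2 set: lower/upper memberships bounding the uncertainty
    about where the type-1 membership curve really lies."""
    upper = tri(x, a - blur, b, c + blur)
    lower = max(0.0, tri(x, a + blur, b, c - blur))
    return lower, upper

x = 0.5
mu_t1 = tri(x, 0.0, 1.0, 2.0)
lo, hi = interval_type2(x, 0.0, 1.0, 2.0)
print(mu_t1, lo, hi)  # the type-1 value lies inside the [lower, upper] band
```

A general type-2 set goes one step further than this interval version: it assigns a full (secondary) membership distribution over the [lower, upper] band rather than treating every point in it as equally possible, which is the "third dimension" the abstract refers to.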

Optimal sensor placement of retrofitted concrete slabs with nanoparticle strips using novel DECOMAC approach

  • Ali Faghfouri;Hamidreza Vosoughifar;Seyedehzeinab Hosseininejad
    • Smart Structures and Systems
    • /
    • v.31 no.6
    • /
    • pp.545-559
    • /
    • 2023
  • Nanoparticle strips (NPS) are widely used as external reinforcement for two-way reinforced concrete slabs, and the Structural Health Monitoring (SHM) of these slabs is an important issue that was evaluated in this study, which was carried out analytically and numerically to optimize sensor placement. The properties of the slabs and of the carbon nanotube composite sheets were considered isotropic and orthotropic, respectively. A nonlinear Finite Element Method (FEM) approach and a suitable optimal sensor placement approach were developed by the authors as a new MATLAB toolbox called DECOMAC. A suitable multi-objective function based on the distributed ECOMAC method was considered in the optimization process. Common concrete slabs with different aspect ratios were considered as case studies, and the dimensions and spacing of the nano strips in the retrofitting process were selected according to building codes. The Optimal Sensor Placement (OSP) results of the DECOMAC algorithm on un-retrofitted and retrofitted slabs were compared; statistical analysis according to the Mann-Whitney criterion shows that there is a significant difference between them (mean P-value = 0.61).
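The DECOMAC toolbox itself is not public, but the family of MAC-based sensor-placement methods it builds on can be sketched. The following illustration (an assumption, not the authors' algorithm) computes the Modal Assurance Criterion between mode shapes and greedily drops candidate sensors while keeping the retained modes distinguishable (low off-diagonal MAC):

```python
import numpy as np

def mac(phi_i, phi_j):
    """Modal Assurance Criterion between two mode-shape vectors (0..1)."""
    num = abs(phi_i @ phi_j) ** 2
    return num / ((phi_i @ phi_i) * (phi_j @ phi_j))

# Two toy mode shapes sampled at 8 candidate sensor locations
rng = np.random.default_rng(1)
modes = np.linalg.qr(rng.normal(size=(8, 2)))[0]  # orthonormal columns

def offdiag_mac(rows):
    """Off-diagonal MAC of the modes restricted to a sensor subset."""
    sub = modes[sorted(rows)]
    return mac(sub[:, 0], sub[:, 1])

# Greedy OSP sketch: remove sensors one at a time, each time dropping the
# sensor whose removal keeps the off-diagonal MAC lowest.
chosen = set(range(8))
while len(chosen) > 4:
    worst = min(chosen, key=lambda r: offdiag_mac(chosen - {r}))
    chosen.remove(worst)
print(sorted(chosen))
```

The idea generalizes: the actual ECOMAC/DECOMAC objective is a multi-objective function, but the core quantity being controlled is still how well the reduced sensor set separates the mode shapes.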

Smart City Governance Logic Model Converging Hub-and-spoke Data Management and Blockchain Technology (허브 앤 스포크형 데이터 관리 및 블록체인 기술 융합 스마트도시 거버넌스 로직모델)

  • Choi, Sung-Jin
    • Journal of KIBIM
    • /
    • v.14 no.1
    • /
    • pp.30-38
    • /
    • 2024
  • This study proposes a smart city governance logic model that can accommodate diverse information service systems by combining hub-and-spoke and blockchain technologies as a data management model. Specifically, the research focuses on deriving the logic of an operating system that can work across smart city planning based on these two data governance technologies. The first step of the logic is the generation and collection of information: information is divided into that which requires protection and that which can be shared with the public; the privacy-sensitive information is put on a blockchain, while the shareable information is integrated and aggregated in a data hub. The next step is the processing and use of the information, which can actively use blockchain technology, while for the shareable information the governance logic is built in parallel in the hub-and-spoke form. Next is the logic of the distribution stage, where the key is to establish a service contact point between service providers and beneficiaries; the study proposes one-to-one data exchange relationships among information providers, information consumers, and information processors. Finally, to expand and promote citizen participation through a reasonable compensation system in the operation of smart cities, a virtual currency was developed as a local currency, and an open operating logic was designed so that the local virtual currency can serve as compensation for information.
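The first step of the logic (routing privacy-sensitive records to a hash-linked chain and shareable records to a central hub) can be sketched as follows. This is a toy illustration of the routing idea only; the record fields and the "plate" sensitivity rule are hypothetical, not from the paper:

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record to a hash-linked chain (minimal blockchain sketch)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

private_chain, public_hub = [], []
records = [
    {"cctv_id": 1, "plate": "12A3456"},   # personally identifying
    {"sensor": "pm10", "value": 31},      # shareable environmental data
]
for record in records:
    if "plate" in record:       # privacy-sensitive -> blockchain
        add_block(private_chain, record)
    else:                       # shareable -> hub-and-spoke data hub
        public_hub.append(record)

print(len(private_chain), len(public_hub))  # prints: 1 1
```

The split mirrors the paper's logic: tamper-evidence and access control for protected information, cheap central aggregation for information that can be integrated and reused.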

Acceleration amplification characteristics of embankment reinforced with rubble mound

  • Jung-Won Yun;Jin-Tae Han;Jae-Kwang Ahn
    • Geomechanics and Engineering
    • /
    • v.36 no.2
    • /
    • pp.157-166
    • /
    • 2024
  • Generally, the rubble mound installed on the slope embankment of an open-type wharf is designed for the impact of wave force, with no consideration of seismic force. Therefore, in this study, dynamic centrifuge model test results were analyzed to examine the acceleration amplification of an embankment reinforced with a rubble mound under seismic conditions. The experimental results show that when rubble mounds were installed on the ground surface of the embankment, the acceleration response of the embankment decreased by approximately 22%, and the imbalance in ground settlement decreased significantly, from eight times to two times. Furthermore, based on the experimental results, one-dimensional site response (1DSR) analyses were conducted. They indicated that reinforcing the embankment with a rubble mound can decrease the peak ground acceleration (PGA) and the short-period response (below 0.6 seconds) of the ground surface by approximately 28%, while no significant impact on the long-period response (above 0.6 seconds) was observed. Additionally, in ground with lower relative density, a larger decrease in response and a wider range of reduced periods were observed. Considering that the reduced short-period range corresponds to the critical periods in the design response spectrum, reinforcing loose ground with a rubble mound can effectively decrease the acceleration response of the ground surface.

Development of Safety Training Delivery Method Using 3D Simulation Technology for Construction Worker (건설현장 작업자를 위한 3차원 시뮬레이션 바탕의 안전 교육전달 매체 개발)

  • Ahn, Sungjin;Park, Young Jun;Park, Tae-Hwan;Kim, Tae-Hui
    • Journal of the Korea Institute of Building Construction
    • /
    • v.15 no.6
    • /
    • pp.621-629
    • /
    • 2015
  • Construction worker safety and safety training continue to be main issues in the construction industry. To promote safety awareness among workers, it is imperative to develop more effective and efficient safety training. This study compared two methods of construction worker safety training: 1) a conventional lecture and 2) 3D simulation through Building Information Modeling. Both training methods covered the same content, a selection of safety standards and guides suggested by the Occupational Safety and Health Agency and the Korea Occupational Safety and Health Agency, produced in the two training formats. A survey was conducted targeting safety managers, in which the managers evaluated the lifelikeness, active learning, and enjoyment that each training method can promote. The results showed that the innovative method using 3D simulation was more effective than the conventional lecture method in terms of lifelikeness, active learning, and enjoyment, implying that innovative methods using virtual reality are more effective than conventional lectures.

Risk Evaluation and Analysis on Simulation Model of Fire Evacuation based on CFD - Focusing on Incheon Bus Terminal Station (CFD기반 화재 대피 시뮬레이션 모델을 적용한 위험도 평가 분석 -인천터미널역 역사를 대상으로)

  • Kim, Min Gyu;Joo, Yong Jin;Park, Soo Hong
    • Spatial Information Research
    • /
    • v.21 no.6
    • /
    • pp.43-55
    • /
    • 2013
  • Recently, research to visualize and reproduce evacuation situations such as terrorism, disaster, and fire in indoor spaces has come into the spotlight, and it requires modeling of the interior space and reliable analysis through life-safety evaluation. Therefore, this paper aims to develop a simulation model that can suggest evacuation route guidance and safety analysis by considering the major risk factors of fire in an actual building. First of all, we designed a 3D-based fire and evacuation model of a subway station building in Incheon and performed fire risk analysis through thermal parameters, based on the interior materials supplied by the Incheon Transit Corporation. To evaluate life safety, ASET (Available Safe Egress Time), the time for which occupants can endure without harm, and RSET (Required Safe Egress Time) were calculated through evacuation simulation with the Fire Dynamics Simulator. Finally, we conclude that a more realistic safety assessment is achieved through an indoor space model based on 3D building information and simulation analysis applying safety guidelines for the measurement of fire and evacuation risk.
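The ASET/RSET comparison at the heart of this kind of study reduces to a simple tenability criterion: occupants are safe when the time until conditions become untenable exceeds the time needed to evacuate. A minimal sketch, with a safety factor and numbers that are purely illustrative (not from the Incheon study):

```python
def is_safe(aset_s: float, rset_s: float, margin: float = 1.5) -> bool:
    """Pass/fail egress check: ASET must exceed RSET by a safety factor.

    aset_s: Available Safe Egress Time in seconds (from fire simulation).
    rset_s: Required Safe Egress Time in seconds (from evacuation simulation).
    margin: safety factor applied to RSET (assumed value, not a code limit).
    """
    return aset_s >= margin * rset_s

# Illustrative numbers only:
print(is_safe(aset_s=300.0, rset_s=240.0))  # False: 300 < 1.5 * 240
print(is_safe(aset_s=600.0, rset_s=240.0))  # True
```

In practice ASET comes from a CFD fire model (e.g., FDS thermal and smoke tenability limits) and RSET from an agent-based evacuation model, evaluated per location rather than as single scalars; the scalar check above is the summary decision rule.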

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.143-163
    • /
    • 2016
  • The demographics of Internet users are the most basic and important sources for target marketing or personalized advertisements on the digital marketing channels which include email, mobile, and social media. However, it gradually has become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although the marketing department is able to get the demographics using online or offline surveys, these approaches are very expensive, long processes, and likely to include false statements. Clickstream data is the recording an Internet user leaves behind while visiting websites. As the user clicks anywhere in the webpage, the activity is logged in semi-structured website log files. Such data allows us to see what pages users visited, how long they stayed there, how often they visited, when they usually visited, which site they prefer, what keywords they used to find the site, whether they purchased any, and so forth. For such a reason, some researchers tried to guess the demographics of Internet users by using their clickstream data. They derived various independent variables likely to be correlated to the demographics. The variables include search keyword, frequency and intensity for time, day and month, variety of websites visited, text information for web pages visited, etc. The demographic attributes to predict are also diverse according to the paper, and cover gender, age, job, location, income, education, marital status, presence of children. A variety of data mining methods, such as LSA, SVM, decision tree, neural network, logistic regression, and k-nearest neighbors, were used for prediction model building. However, this research has not yet identified which data mining method is appropriate to predict each demographic variable. Moreover, it is required to review independent variables studied so far and combine them as needed, and evaluate them for building the best prediction model. 
The objective of this study is to choose the clickstream attributes most likely to be correlated with the demographics from the results of previous research, and then to identify which data mining method is best suited to predicting each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, and 64 clickstream attributes from previous research are applied to predict them. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles which include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction of the clickstream variables to address the curse of dimensionality and the overfitting problem; we utilize three approaches, based on decision trees, PCA, and cluster analysis. In the third step we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data representing 5 demographics and 16,962,705 online activities of 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction performs best when using decision-tree-based dimension reduction and a neural network, whereas the prediction of gender and marital status is most accurate when applying SVM without dimension reduction.
We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used to predict their demographics and thereby be utilized for digital marketing.
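The paper's four-step comparison (dimension reduction × classifier, evaluated by 5-fold cross-validation) can be sketched in a few lines. This uses synthetic stand-ins for the 64 clickstream attributes and a binary target, so the numbers are not the paper's results; it only demonstrates the model-selection mechanics:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))               # stand-in: 64 clickstream attributes
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # stand-in binary target, e.g. gender

# Two alternative pipelines: SVM on raw attributes vs. PCA-reduced attributes
raw_svm = make_pipeline(StandardScaler(), SVC())
pca_svm = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())

# 5-fold cross-validated accuracy, as in the paper's evaluation step
acc_raw = cross_val_score(raw_svm, X, y, cv=5).mean()
acc_pca = cross_val_score(pca_svm, X, y, cv=5).mean()
print(f"SVM: {acc_raw:.2f}  PCA+SVM: {acc_pca:.2f}")
```

The same loop, extended over {no reduction, decision tree, PCA, clustering} × {SVM, neural network, logistic regression} and over each demographic target, reproduces the study's design of picking the best pipeline per demographic variable.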

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands, I studied the following, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to make a unit hydrograph, but I want to explain here how to make one from the actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph every two hours, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, recording an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gauge stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gauge record of the Mokpo meteorological station to determine the rainfall intensity. To develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff. I also tried to keep the difference between the calculated discharge and the measured discharge less than 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimensions. According to the principles of design presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structure damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area.
To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. The mean tide is adequate for determining the sluice dimensions, because the spring tide is the worst case and the neap tide the best case for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase in velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise in the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account.
When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner. Using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m3/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h=\frac{V^2}{2g}$, and it must be equal to the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir"; the flow over the weir is then dependent upon the higher water level and not on the difference between the high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 of the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, owing to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec.
As the maximum velocities are higher than this limit, we must use other construction methods to close the gap. This can be done with dump cars from each side or by using a cableway.
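The unit-hydrograph step described in the abstract (subtract base flow, then divide each ordinate by the effective rainfall) can be sketched numerically. The numbers below are illustrative only, not the 1963 Naju storm data, and a constant base flow is an assumed simplification:

```python
# Ordinates of the measured discharge at two-hour intervals, in m3/s
base_flow = 20.0                            # assumed constant base flow, m3/s
discharge = [20, 45, 95, 70, 40, 25, 20]    # total measured flow, m3/s
rain_depth_mm = 5.0                         # effective rainfall depth, mm

# Step 1: subtract base flow to get the direct-runoff hydrograph
direct_runoff = [q - base_flow for q in discharge]

# Step 2: divide each ordinate by the rainfall depth so the result
# corresponds to one unit (1 mm) of effective rainfall
unit_hydrograph = [q / rain_depth_mm for q in direct_runoff]  # m3/s per mm
print(unit_hydrograph)  # prints: [0.0, 5.0, 15.0, 10.0, 4.0, 1.0, 0.0]
```

Once derived, the unit hydrograph lets you estimate the runoff hydrograph of any other storm by scaling and superposing these ordinates according to that storm's effective rainfall sequence, which is what makes it useful for the reservoir-inflow calculation in section 2.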


A Study on the Space Forming through Urban Agricultural Theory, Paradigm and Typology (도시농업의 이론, 패러다임, 유형을 통한 공간연구)

  • Chang, Dong-Min
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.2
    • /
    • pp.501-513
    • /
    • 2017
  • This study analyzed the state of urban agriculture development through theories, paradigms, and typology to determine the application frequency and development keywords for space forming. The results showed that, first, urban space by distance determines the "dimension of space forming" through self-production, public production, and national-social operation. Second, complex space by shape determines the "identity of space forming" through a "flat shape" for using widespread land, a "compact shape" for overcoming small and poor land, and a "fusion of flat and compact shapes" for their systematic combined use. Third, building and interior space by location determine the "utility of space forming" through land, roof, wall, veranda, interior, and infrastructure spaces. These concepts of space forming in urban agriculture have an organic correlation and will develop sustainably through future cases. In addition, space forming in urban agriculture produces new creative space through various fusion processes and will be a development trend of new urban agriculture.