Proceedings of the Korea Water Resources Association Conference
/
2015.05a
/
pp.237-237
/
2015
The district of Marlborough has had more than its share of river management projects over the past 150 years, each one uniquely affecting the geomorphology and flood hazard of the Wairau Plains. A major early project was to block the Opawa distributary channel at Conders Bend. The Opawa distributary channel took a third or more of the Wairau River's floodwaters and posed a growing threat to Blenheim. Blocking the Opawa required the Wairau and Lower Wairau rivers to carry greater flood flows more often; consequently, the Lower Wairau River was breaking out of its stopbanks approximately every seven years. The idea of diverting floodwaters at Tuamarina by providing a direct diversion to the sea through the beach ridges was conceptualised around the 1920s; however, limits on resources and machinery meant that excavating this diversion did not become feasible until the 1960s. In 1964 a 10 m wide pilot channel was cut from the sea to Tuamarina with an initial capacity of $700m^3/s$. It was expected that floods would eventually scour this 'Wairau Diversion' out to its design channel width of 150 m. This took many more years than initially thought, but after approximately 50 years, with a little mechanical assistance, the Wairau Diversion reached an adequate capacity. Using the power of the river to erode the channel out to its design width and depth was a brilliant idea that saved many thousands of dollars in construction costs, and it is somewhat ironic that the very same concept is now being used to deal with the aggradation problem that the Wairau Diversion has caused. The introduction of the Wairau Diversion did provide some flood relief to the lower reaches of the river, but unfortunately, while the Diversion channel was eroding and enlarging, the Lower Wairau River was aggrading and losing capacity because its reduced flood flows could no longer pass its sediment load.
It is estimated that approximately $2,000,000m^3$ of sediment was deposited on the bed of the Lower Wairau River between the Diversion's introduction in 1964 and 2010, raising the Lower Wairau's bed by upwards of 1.5 m in some locations. A numerical morphological model (MIKE-11 ST) was used to assess a number of options, which led to the decision, and resource consent, to construct an erodible (fuse plug) bank at the head of the Wairau Diversion to divert more frequent scouring flows ($+400m^3/s$) down the Lower Wairau River. Full control gates were ruled out on the grounds of expense. The erodible bank was first constructed in late 2009, with the bank's level at the fuse location set to overtop and begin washing out at a combined Wairau flow of $1,400m^3/s$, which avoids berm flooding in the Lower Wairau. In the three years since the erodible bank was first constructed, the Wairau River has sustained 14 events with recorded flows at Tuamarina above $1,000m^3/s$, three of them in excess of $2,500m^3/s$. These freshes and floods have resulted in the washout and rebuild of the erodible bank eight times, with a combined rebuild expenditure of $80,000. Marlborough District Council's Rivers & Drainage Department maintains a regular monitoring programme for the bed of the Lower Wairau River, which consists of recurrently surveying a series of standard cross sections and estimating the mean bed level (MBL) at each section as well as an overall MBL change over time. A survey was carried out just prior to the installation of the erodible bank and another was carried out earlier this year. The results from this latest survey show that, for the first time since construction of the Wairau Diversion, the Lower Wairau River is enlarging. It is estimated that the entire bed of the Lower Wairau has eroded down by an overall average of 60 mm since the introduction of the erodible bank, which equates to a total volume of $260,000m^3$.
At a cost of $\$0.30/m^3$ this represents excellent value compared to mechanical dredging, which would likely be in excess of $\$10/m^3$. This confirms that the idea of using the river to enlarge the channel is again working for the Wairau River system, and that in time nature's "excavator" will provide a channel capacity that continues to meet design requirements.
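The cost figures above can be checked with a back-of-envelope calculation using the quantities quoted in the abstract (the \$10/m³ dredging rate is the abstract's own rough estimate):

```python
# Back-of-envelope check of the cost figures quoted above (values from the abstract).
rebuild_cost = 80_000.0    # total erodible-bank rebuild expenditure, $
eroded_volume = 260_000.0  # sediment eroded from the Lower Wairau bed, m^3
dredging_rate = 10.0       # assumed mechanical dredging cost, $/m^3

cost_per_m3 = rebuild_cost / eroded_volume
savings = (dredging_rate - cost_per_m3) * eroded_volume
print(f"{cost_per_m3:.2f} $/m^3")          # ≈ 0.31 $/m^3, i.e. the ~$0.30/m^3 quoted
print(f"{savings:,.0f} $ saved vs dredging")
```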
In this study, the radiation exposure of IASCC test workers was evaluated by applying process simulation technology. Using DELMIA Version 5, a commercial process simulation code, the IASCC test facility, hot cells, and workers were modelled, IASCC test activities were implemented, and the cumulative exposure of workers passing through the dose-distributed space was evaluated through user coding. To simulate worker behavior, human manikins with 200 or more degrees of freedom, imitating the human musculoskeletal system, were applied. To calculate each worker's exposure, the coordinates, start time, and retention period for each posture were extracted by accessing the sub-information of the human manikin task, and the cumulative exposure was calculated by multiplying the spatial dose value by the posture retention time. The spatial dose for the exposure evaluation was calculated using MCNP6 Version 1.0, and the calculated spatial dose was embedded into the process simulation domain. Comparing the exposure evaluation by process simulation with a typical exposure evaluation, the annual exposure for daily test work at the regular entrance was predicted to be 0.388 mSv/year and 1.334 mSv/year, respectively. Exposure assessment was also performed on special tasks carried out in areas with high spatial doses; tasks with high exposure could be easily identified, and work improvement plans could be derived intuitively through visualization of the human manikin postures and the spatial doses of the tasks.
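The cumulative-exposure rule described above (spatial dose rate at each posture's coordinates multiplied by the posture retention time, summed over postures) can be sketched as follows. All names and values here are hypothetical illustrations, not the study's actual data structures:

```python
# Minimal sketch of the posture-wise cumulative exposure calculation described
# above. The dose field and posture log below are hypothetical examples.

# Spatial dose-rate field, mSv/h keyed by grid cell (as if embedded from an
# MCNP spatial dose calculation into the simulation domain).
dose_field = {(0, 0): 0.002, (1, 0): 0.010, (1, 1): 0.050}

# Posture log extracted from a human-manikin task:
# (grid cell occupied, retention time in hours)
postures = [((0, 0), 0.5), ((1, 0), 0.25), ((1, 1), 0.1)]

# Cumulative exposure = sum over postures of dose rate x retention time.
cumulative = sum(dose_field[cell] * hours for cell, hours in postures)
print(f"cumulative exposure: {cumulative:.4f} mSv")
```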
Journal of Korean Tunnelling and Underground Space Association
/
v.20
no.6
/
pp.1061-1071
/
2018
A subsea tunnel, being a super-sized underground structure, must ensure safety during earthquakes as well as under ordinary conditions. During an earthquake in particular, a subsea tunnel exhibits a variety of response behaviors owing to its rigidity relative to the surrounding ground and to differential displacement, so its behavior characteristics can hardly be anticipated. This investigation aims to understand the seismic behavior characteristics of an imaginary subsea tunnel that passes through a fault zone whose physical properties differ from those of the surrounding ground. To achieve this aim, the dynamic response behaviors of a subsea tunnel passing through a fault zone were observed by means of laboratory experiments. For improved earthquake resistance, a subsea tunnel configuration incorporating flexible segments was considered. It is believed that a database can subsequently be established through 3-dimensional seismic analysis of various grounds, on the basis of results verified from experiments and analyses under various conditions. The present investigation performed a 1 g shaking table test in order to verify the results of the 3-dimensional seismic analysis. A model satisfying the similitude (1:100) of a scaled-down model test was manufactured, and tests for three cases were carried out. The incident seismic wave was an artificial seismic wave having both long-period and short-period earthquake properties, introduced in the horizontal direction perpendicular to the axis of the tunnel, and a fault zone was modeled. For the numerical analysis, the elastic modulus of the fault zone was assumed to be 1/5 of that of the ground surrounding the tunnel, in order to simulate a fault zone.
As a result, reduced acceleration was confirmed as the physical properties of the fault zone increased, and the shaking table test showed the same tendency as the 3-dimensional analysis.
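For a 1:100 scaled model in a 1 g shaking table test, the similitude relations fix how time and frequency scale between model and prototype. A minimal sketch, assuming the common Froude-type scaling in which acceleration scales 1:1 because gravity cannot be scaled (the paper's actual similitude law, e.g. Iai's, may differ):

```python
# Hedged sketch of typical 1 g shaking-table similitude factors for a
# geometric scale of 1:100. Assumes Froude-type scaling (acceleration 1:1);
# the study's actual similitude relations may differ.
length_scale = 1 / 100            # model / prototype
accel_scale = 1.0                 # 1 g test: gravity is unscaled
time_scale = length_scale ** 0.5  # time scales with the square root of length
freq_scale = 1 / time_scale       # model frequencies are correspondingly higher

print(f"time scale     : 1:{1/time_scale:.0f}")
print(f"frequency scale: x{freq_scale:.0f}")
```

Under these assumptions a 1 s prototype period appears as a 0.1 s period in the model, which is why the input motion must be compressed in time before being applied to the table.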
Journal of Korean Society of Disaster and Security
/
v.12
no.2
/
pp.73-82
/
2019
Recently, as the frequency of sudden floods due to climate change has increased, flood damage to riverside social infrastructure has grown, posing a threat of overflow. Therefore, rapid prediction of potential flooding of riverside social infrastructure is necessary for administrators. However, most current flood forecasting models, including hydraulic models, share a limitation: high accuracy of numerical results but long simulation times. To alleviate this limitation, data-driven models using artificial neural networks have been widely used. However, the existing models cannot consider time-series parameters. In this study, the water surface elevation at the Hangang River bridge was predicted using a NARX model that considers time-series parameters, and the results of ANN and RNN models were compared with the NARX model to determine its suitability. Using the 10 years of hydrological data from 2009 to 2018, 70% of the data were used for training and 15% each for testing and evaluation. In predicting the water surface elevation at the Hangang River bridge 3 hours ahead for 2018, the RMSE of the ANN, RNN, and NARX models was 0.20 m, 0.11 m, and 0.09 m respectively; the MAE was 0.12 m, 0.06 m, and 0.05 m; and the peak errors were 1.56 m, 0.55 m, and 0.10 m. Analysis of these prediction errors shows that the NARX model, which considers time-series parameters, is the most suitable for predicting water surface elevation. This is because the NARX model can learn the trend of the time-series data and can derive accurate predictions even at high water surface elevations by using the hyperbolic tangent and Rectified Linear Unit functions as activation functions. However, the NARX model suffers from vanishing gradients as the sequence length becomes longer.
In the future, the accuracy of the water surface elevation prediction will be examined by using the LSTM model.
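The three error metrics compared above (RMSE, MAE, and peak error) can be computed as follows; the observed/predicted series here are short hypothetical examples, not the study's data:

```python
import math

# Minimal sketch of the three error metrics reported above, applied to
# hypothetical observed/predicted water-surface-elevation series (m).
def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def peak_error(obs, pred):
    # difference between the observed and predicted flood peaks
    return abs(max(obs) - max(pred))

obs  = [2.1, 2.4, 3.0, 4.2, 3.6]  # hypothetical observed stage
pred = [2.0, 2.5, 2.9, 4.0, 3.7]  # hypothetical model prediction

print(rmse(obs, pred), mae(obs, pred), peak_error(obs, pred))
```

Peak error is reported separately because a model can have a small average error yet badly miss the flood peak, which is the value that matters most for overflow warnings.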
Satellite sea surface temperature (SST) composites provide important data for numerical forecasting models and for research on global warming and climate change. In this study, six representative SST composite databases were collected from 2007 to 2018, and the characteristics of the spatial structures of SSTs were analyzed in the seas around the Korean Peninsula. The SST composite data were compared with time series of in-situ measurements from ocean meteorological buoys of the Korea Meteorological Administration by analyzing the maximum error and its occurrence time at each buoy station. Large differences between the SST data and in-situ measurements were detected at the western coastal stations, in particular Deokjeokdo and Chilbaldo, with a dominant annual or semi-annual cycle. At the Pohang buoy, a large SST difference was observed in the summer of 2013, when cold water appeared in the surface layer due to strong upwelling. Spectral analysis of the time-series SST data showed that daily satellite SSTs have spectral energy similar to that of in-situ measurements at periods longer than approximately one month. On the other hand, the difference in spectral energy between the satellite SSTs and in-situ temperatures tended to grow as the temporal frequency increased. This suggests that satellite SST composite data may not adequately express the temporal variability of SST in near-coastal areas. The fronts in the satellite SST images revealed differences among the SST databases in the spatial structure and magnitude of the oceanic fronts. The spatial scale expressed by the SST composite fields was investigated through spatial spectral analysis. As a result, the high-resolution SST composite images expressed the spatial structures of mesoscale ocean phenomena better than the low-resolution SST images.
Therefore, in order to express the actual mesoscale ocean phenomenon in more detail, it is necessary to develop more advanced techniques for producing the SST composites.
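The kind of spectral comparison described above can be illustrated with a simple periodogram of two synthetic daily series: a "buoy" series carrying both an annual-type cycle and high-frequency variability, and a "composite" series in which the high-frequency part has been smoothed away. The series and the smoothing are hypothetical illustrations, not the study's data:

```python
import cmath, math

# Hedged sketch: a plain DFT periodogram comparing spectral energy of two
# hypothetical SST series (buoy vs satellite composite) by frequency.
def periodogram(x):
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2 + 1)]

n = 64
# "Buoy": slow cycle (period 32) plus fast variability (period 4).
buoy = [15 + 5 * math.sin(2 * math.pi * t / 32) + math.sin(2 * math.pi * t / 4)
        for t in range(n)]
# "Composite": same slow cycle, fast variability smoothed out.
sat = [15 + 5 * math.sin(2 * math.pi * t / 32) for t in range(n)]

pb, ps = periodogram(buoy), periodogram(sat)
# k=2 is the slow cycle (similar energy in both); k=16 is the fast
# variability (present only in the buoy series).
print(pb[2], ps[2], pb[16], ps[16])
```

This mirrors the finding above: the two spectra agree at long periods, while the composite underestimates energy at high frequencies.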
Asia-Pacific Journal of Business Venturing and Entrepreneurship
/
v.17
no.4
/
pp.193-204
/
2022
This study measured customer satisfaction among kiosk users and analyzed the impact of potential improvements. In modern times, owing to the development of technology and the improvement of the online environment, the probability that simple labor tasks will disappear within 10 years is close to 90%. Domestic research likewise predicts that 'simple labor jobs' will disappear under the influence of advanced technology, with a probability of about 36%. In particular, as demand for non-face-to-face services has increased due to the global spread of COVID-19, the introduction of kiosks has accelerated, and the global market is expected to grow to 83.5 billion won in 2021, an average annual growth rate of 8.9%. However, due to the unmanned nature of kiosks, some consumers still have difficulty using them. Consumers unfamiliar with these technologies hold a negative attitude toward service co-production, owing to rejection of non-face-to-face services and anxiety about service errors; this lack of understanding leads to role conflicts between sales clerks and consumers, and creates inequality in service provision relative to generations accustomed to using the technology. In addition, since the kiosk is a representative technology-based self-service industry, if users feel uncomfortable or additional labor is required, the overall service value decreases and the growth of the kiosk industry itself can be suppressed. Therefore, interviews on the main points of direct use were conducted with actual users, and the following evaluation items were extracted: display color scheme, text size, device design, device size, internal UI (interface), amount of information, recognition sensor (barcode, NFC, etc.), display brightness, self-event, and reaction speed.
Afterwards, using a questionnaire, each expected evaluation item was classified into Kano model quality attributes, and Timko's customer satisfaction coefficient, which can be calculated as an exact numerical value, was computed. A PCSI Index analysis was additionally performed to determine improvement priorities by classifying the improvement impact of the kiosk evaluation items. As a result, the impact of improvement appears in the order of internal UI (interface), text size, recognition sensor (barcode, NFC, etc.), reaction speed, self-event, display brightness, amount of information, device size, device design, and display color scheme. Through this, we intend to contribute to a comprehensive comparison of kiosk-based research in each field and to set the direction for improvement in the venture industry.
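Timko's customer satisfaction coefficients mentioned above are conventionally computed from the Kano classification counts. A minimal sketch with hypothetical survey counts (the study's actual counts are not given in the abstract):

```python
# Timko's customer-satisfaction coefficients from Kano classification counts.
# a/o/m/i = respondents classifying an item as Attractive, One-dimensional,
# Must-be, and Indifferent. Counts below are hypothetical.
def timko(a, o, m, i):
    total = a + o + m + i
    better = (a + o) / total    # satisfaction coefficient, 0..1
    worse = -(o + m) / total    # dissatisfaction coefficient, -1..0
    return better, worse

# Hypothetical counts for one kiosk item, e.g. "internal UI":
better, worse = timko(a=40, o=30, m=20, i=10)
print(better, worse)  # 0.7 -0.5
```

A high Better value means improving the item raises satisfaction strongly; a Worse value near -1 means failing to provide it causes strong dissatisfaction. The PCSI Index builds on these coefficients to rank improvement priorities.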
Internet commerce has been growing at a rapid pace for the last decade. Many firms try to reach wider consumer markets by adding the Internet channel to their existing traditional channels. Despite the various benefits of the Internet channel, a significant number of firms have failed in managing the new type of channel. Previous studies could not clearly explain these conflicting results associated with the Internet channel. One of the major reasons is that most previous studies conducted analyses under a specific market condition and attributed the findings to Internet channel introduction in general; their results are therefore strongly influenced by the specific market settings. However, firms face various market conditions in the real world. The purpose of this study is to investigate the impact of various market environments on a firm's optimal channel strategy by employing a flexible game theory model. We capture various market conditions with consumer density and the disutility of using the Internet.
The channel structures analyzed in this study are as follows. Before the Internet channel is introduced, a monopoly manufacturer sells its products through an independent physical store. From this structure, the manufacturer could introduce its own Internet channel (MI). The independent physical store could also introduce its own Internet channel and coordinate it with the existing physical store (RI). An independent Internet retailer such as Amazon could enter the market (II); in this case, two types of independent retailers compete with each other. In this model, consumers are uniformly distributed over a two-dimensional space. Consumer heterogeneity is captured by a consumer's geographical location (${\chi}_i$) and his disutility of using the Internet channel (${\delta}_{N_i}$).
Various market conditions are captured by these two consumer heterogeneities. Panel (a) illustrates a market with symmetric consumer distributions. The model also explicitly captures asymmetric distributions of consumer disutility. In a market like the one represented in panel (c), the average consumer disutility of using an Internet store is relatively smaller than that of using a physical store. For example, this case represents a market in which 1) the product is suitable for Internet transactions (e.g., books) or 2) the level of e-commerce readiness is high, as in Denmark or Finland. On the other hand, in a market like panel (b), the average consumer disutility of using an Internet store is relatively greater than that of using a physical store. Countries like Ukraine and Bulgaria, or the market for "experience goods" such as shoes, are examples of this market condition.
The scenarios of consumer distributions analyzed in this study are as follows: the range for disutility of using the Internet (${\delta}_{N_i}$) is held constant, while the range of consumer distribution (${\chi}_i$) varies from -25 to 25, from -50 to 50, from -100 to 100, from -150 to 150, and from -200 to 200.
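The consumer choice underlying these scenarios can be sketched as a simple simulation. The functional forms and parameter values below are hypothetical illustrations of the model's logic (location implies a travel cost for the physical store; ${\delta}_{N_i}$ is the cost of using the Internet store), not the paper's actual specification:

```python
import random

# Hedged sketch of the consumer-choice logic described above, with
# hypothetical utility forms and parameters: each consumer i has a location
# chi_i and an Internet disutility delta_i, and buys from whichever channel
# yields higher net utility.
random.seed(0)
v = 100.0                  # gross valuation of the product (assumption)
t = 0.4                    # travel cost per unit distance (assumption)
p_store, p_net = 60.0, 58.0  # hypothetical channel prices

def channel_choice(chi, delta):
    u_store = v - p_store - t * abs(chi)  # physical store: pay travel cost
    u_net = v - p_net - delta             # Internet store: pay disutility
    if max(u_store, u_net) < 0:
        return "none"
    return "store" if u_store >= u_net else "net"

# One scenario: chi_i uniform on [-100, 100], delta_i uniform on [0, 40].
consumers = [(random.uniform(-100, 100), random.uniform(0, 40))
             for _ in range(10_000)]
shares = {"store": 0, "net": 0, "none": 0}
for chi, delta in consumers:
    shares[channel_choice(chi, delta)] += 1
print(shares)
```

Widening the range of ${\chi}_i$ in this sketch raises average travel cost and shifts share toward the Internet store, which is the comparative-statics logic the analysis below formalizes.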
The analysis results can be summarized as follows. As the average travel cost in a market decreases while the average disutility of Internet use remains the same, the average retail price, total quantity sold, physical store profit, monopoly manufacturer profit, and thus total channel profit increase. On the other hand, the quantity sold through the Internet and the profit of the Internet store decrease with a decreasing average travel cost relative to the average disutility of Internet use. We find that a channel with an advantage over the other kind of channel serves a larger portion of the market. In a market with a high average travel cost, in which the Internet store has a relative advantage over the physical store, for example, the Internet store becomes a mass retailer serving a larger portion of the market. This result implies that the Internet becomes a more significant distribution channel in markets characterized by greater geographical dispersion of buyers, or as consumers become more proficient in Internet usage. The results indicate that the degree of price discrimination also varies depending on the distribution of consumer disutility in a market. The manufacturer in a market in which the average travel cost is higher than the average disutility of using the Internet has a stronger incentive for price discrimination than the manufacturer in a market where the average travel cost is relatively lower. We also find that the manufacturer has a stronger incentive to maintain a high price level when the average travel cost in a market is relatively low. Additionally, the retail competition effect due to Internet channel introduction strengthens as the average travel cost in a market decreases. This result indicates that a manufacturer's channel power relative to that of the independent physical retailer becomes stronger with a decreasing average travel cost.
This implication is counter-intuitive, because it is widely believed that the negative impact of Internet channel introduction on a competing physical retailer is more significant in a market like Russia, where consumers are more geographically dispersed, than in a market like Hong Kong, which has a condensed geographic distribution of consumers.