• Title/Summary/Keyword: Turn-to-Turn Models


Tunnel wall convergence prediction using optimized LSTM deep neural network

  • Mahmoodzadeh, Arsalan; Taghizadeh, Mohammadreza; Mohammed, Adil Hussein; Ibrahim, Hawkar Hashim; Samadi, Hanan; Mohammadi, Mokhtar; Rashidi, Shima
    • Geomechanics and Engineering / v.31 no.6 / pp.545-556 / 2022
  • Evaluation and optimization of tunnel wall convergence (TWC) plays a vital role in preventing potential problems during the tunnel construction and utilization stages. When convergence occurs at a high rate, it can lead to significant problems such as a reduced advance rate and reduced safety, which in turn increase operating costs. To design an effective solution, it is important to accurately predict the degree of TWC; this can reduce the level of concern and have a positive effect on the design. With the development of soft computing methods, the use of deep learning algorithms and neural networks in tunnel construction has expanded in recent years. The current study employs a long short-term memory (LSTM) deep neural network to predict TWC, based on 550 data points of observed parameters collected from different tunneling projects. Of the data collected during the pre-construction and construction phases, 80% is randomly selected to train the model and the rest is used to test it. Several metrics, including root mean square error (RMSE) and the coefficient of determination (R²), were used to assess the performance and precision of the applied method. The results indicate acceptable and reliable accuracy: the predicted values are in good agreement with the observed data. The proposed model can be considered for use in similar ground and tunneling conditions. This work has the potential to significantly reduce tunneling uncertainties and make deep learning a valuable tool for planning tunnels.
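As a hedged illustration of the pipeline this abstract describes (an LSTM regressor trained on an 80/20 split and scored with RMSE and R²), the sketch below uses synthetic data and assumed tensor shapes; it is not the authors' code.

```python
# Minimal sketch, not the authors' implementation. The sequence length
# (10) and number of monitored parameters (6) are assumptions; only the
# sample count (550) and 80/20 split come from the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

rng = np.random.default_rng(0)
X = rng.normal(size=(550, 10, 6))   # 550 samples of monitored parameters (synthetic)
y = rng.normal(size=(550, 1))       # observed convergence values (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=0)

model = Sequential([
    LSTM(64, input_shape=(10, 6)),  # hidden size 64 is an arbitrary choice
    Dense(1),                       # regression head predicting convergence
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)

pred = model.predict(X_test)
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("R2:", r2_score(y_test, pred))
```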

An Empirical Study on Influencing Factors of Intention to Use Third-Party Mobile Payment Services: Applying the Task-Technology Fit Model

  • Kim, So-Dam; Lim, Jay-Ick; Yang, Sung-Byung
    • Journal of Information Technology Services / v.15 no.2 / pp.185-201 / 2016
  • Recently, due to the rapid development of information technologies (IT), a variety of attempts have been made to incorporate IT into other fields such as finance and manufacturing. Among them, a novel concept in the spotlight is FinTech, a portmanteau of finance and technology, denoting businesses that drive innovation in the financial service industry through IT. One of the most popular types of FinTech is the third-party mobile payment service (MPS), examples of which can easily be found in South Korea even though actual use of the service remains relatively low. The main purpose of this paper is therefore to empirically investigate factors influencing the intention to use third-party MPS. Based on individual characteristics and the task-technology fit model, the research model of the study is developed, with switching cost included as a moderating variable. The results of structural equation model testing with 316 potential users of Kakao Pay, one of the most popular third-party MPS business models, show that innate innovativeness, task characteristics, and technology characteristics positively influence task-technology fit, which in turn significantly affects the intention to use third-party MPS. A negative moderating role of switching cost is also found. These results could help managers develop better strategies to motivate potential users to adopt their services.
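The paper tests moderation within a structural equation model; as a hedged, simplified stand-in (an interaction term in ordinary regression, not the authors' SEM), the sketch below shows how a negative moderating effect of switching cost would surface. All variable names and data are synthetic; only the sample size (316) comes from the abstract.

```python
# Hedged sketch: moderation via an interaction term, not the paper's SEM.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 316  # sample size matching the study; values are synthetic
df = pd.DataFrame({
    "ttf": rng.normal(size=n),          # task-technology fit (hypothetical score)
    "switch_cost": rng.normal(size=n),  # moderator (hypothetical score)
})
# Build intention with a negative interaction, mirroring the reported direction.
df["intention"] = 0.6 * df.ttf - 0.3 * df.ttf * df.switch_cost + rng.normal(scale=0.5, size=n)

model = smf.ols("intention ~ ttf * switch_cost", data=df).fit()
print(model.params)  # the ttf:switch_cost coefficient should come out negative
```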

Review of Mathematical Models in Performance Calculation of Screw Compressors

  • Stosic, Nikola; Smith, Ian K.; Kovacevic, Ahmed; Mujic, Elvedin
    • International Journal of Fluid Machinery and Systems / v.4 no.2 / pp.271-288 / 2011
  • The mathematical modelling of screw compressor processes and its implementation in their design began about 30 years ago with the publication of several pioneering papers on the topic, mainly at the Purdue Compressor Conferences. This led to the gradual introduction of computer-aided design, which in turn resulted in huge improvements in these machines, especially in oil-flooded air compressors, where the market is very competitive. A review of progress in such methods is presented in this paper, together with their application in successful compressor designs. As a result of their introduction, even small details are now considered significant in efforts to improve performance and reduce costs. Despite this, there are still opportunities to introduce new methods and procedures for improved rotor profiles, design optimisation for each specified duty, and specialized compressor design, all of which can lead to a better product and new areas of application. The paper gives a cross-section of activities in the mathematical modelling of screw compressor processes over the last five decades. It is expected to serve as a basis for further contributions in the area and as a challenge to forthcoming generations of scientists and engineers to concentrate their efforts on finding more extended approaches and submitting their contributions.

The "open incubation model": deriving community-driven value and innovation in the incubation process

  • Ziouvelou, Xenia; Giannaka, Eri; Brochler, Raimund
    • World Technopolis Review / v.4 no.1 / pp.11-22 / 2015
  • Globalization, accelerating technological advancement, and dynamic knowledge diffusion are moving our world closer together at a unique scale and pace. At the same time, our rapidly changing society is confronted with major challenges, ranging from demographic to economic ones, that necessitate highly innovative solutions and force us to reconsider the way we actually innovate and create shared value. The linear, centralized innovation models of the past therefore need to be replaced with new approaches based on an open, collaborative, global network perspective, in which all innovation actors strategically network and collaborate, openly distribute their ideas, and co-innovate/co-create in a global context, utilizing our society's full innovation potential (Innovation 4.0 - Open Innovation 2.0). These emerging innovation paradigms create "an opportunity for a new entrepreneurial renaissance which can drive a Cambrian like explosion of sustainable wealth creation" (Curley 2013). To materialize this entrepreneurial renaissance, it is critical not only to value but also to actively employ these new innovation paradigms so as to derive community-driven shared value from global innovation networks. This paper argues that there is a gap in existing business incubation models that needs to be filled: the innovation and entrepreneurship community cannot afford to ignore the emerging innovation paradigms and rely upon closed incubation models, but has to adopt "open incubation" (Ziouvelou 2013). The open incubation model is based on the principles of open innovation, crowdsourcing, and co-creation of shared value. It enables individual users and innovation stakeholders to strategically network, find collaborators and partners, co-create ideas and prototypes, share them, and utilize the wisdom of the crowd to assess their value, while at the same time finding connections and partners, business and technical information, knowledge on start-up related topics, online tools, online content, open data, open educational material, and, most importantly, access to capital and crowd-funding. By introducing a new incubation phase, namely the "interest phase", open incubation bridges the gap between entrepreneurial need and action and addresses wantpreneurial needs during the innovation conception phase. One ecosystem that aligns fully with the open incubation model and theoretical approach is the VOICE ecosystem. VOICE is an international, community-driven innovation and entrepreneurship ecosystem based on open innovation, crowdsourcing, and co-creation principles that, unlike traditional business incubators, has no physical location. VOICE aims to tap into the collective intelligence of the crowd and turn entrepreneurial interest or need into a collaborative project that results in a prototype and a successful "crowd-venture".

Evolutionary Optimization of Pulp Digester Process Using D-optimal DOE and RSM

  • Chu, Young-Hwan; Han, Chonghun
    • Institute of Control, Robotics and Systems Conference Proceedings / 2000.10a / pp.395-395 / 2000
  • Optimization of existing processes is becoming more important than in the past as environmental problems and concerns about energy savings stand out. When we can model a process mathematically, we can easily optimize it by using the model as constraints. However, modeling is very difficult for most chemical processes, as they include numerous units together with their correlations, and parameters can hardly be obtained. Optimization based on process models is therefore hard to perform. Especially for poorly understood processes, such as bioprocesses or microelectronics materials processes, optimization using a mathematical (first-principles) model is nearly impossible, as the internal mechanism cannot be understood. Consequently, we propose an optimization method that evolves an empirical model instead of a mathematical one. In this method, experiments are first designed so as to eliminate unnecessary runs. D-optimal design of experiments (DOE) is the most developed among DOE methods; it selects design points so as to minimize the parameter variances of the empirical model. Experiments must be performed in order to see the causation between input and output variables, as only correlation structure can be detected in historical data. Then, using the data generated by the experiments, an empirical model, i.e., a response surface, is built by PLS or MLR. Once the process model is constructed, it is used as the objective function for optimization. Since the optimum found is a local one, the above procedure is repeated while moving to a new experimental region to find the global optimum. As a result of application to the pulp digester benchmark model, the kappa number, an indicator of impurity content, decreased from 29.7091 to a very low value, 3.0394. These results show that the proposed methodology performs well enough for optimization and is also applicable to real processes.
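A hedged sketch of one iteration of the evolutionary loop described above: fit a quadratic response surface to experimental data, then minimize it within the current region. The digester is replaced by a hypothetical black-box function, random sampling stands in for D-optimal point selection, and ordinary least squares on quadratic features stands in for PLS/MLR; none of these names come from the paper.

```python
# Sketch of one fit-and-optimize iteration; assumptions noted inline.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

def run_experiment(x):
    # Placeholder for the real pulp digester response (kappa number).
    return 30.0 + (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.5) ** 2

# Design points in the current region. A D-optimal design would choose
# these to minimize parameter variance; random sampling is used for brevity.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(12, 2))
y = np.array([run_experiment(x) for x in X])

# Fit a quadratic response surface (MLR-style; the paper also mentions PLS).
poly = PolynomialFeatures(degree=2)
surface = LinearRegression().fit(poly.fit_transform(X), y)

# Use the fitted surface as the objective, constrained to the region.
res = minimize(lambda x: surface.predict(poly.transform(np.atleast_2d(x)))[0],
               x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("local optimum in this region:", res.x, res.fun)
# In the full procedure, the region is re-centered on res.x and repeated.
```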


Forwarding Protocol Along with Angle Priority in Vehicular Networks

  • Yu, Suk-Dea; Lee, Dong-Chun
    • Convergence Security Journal / v.10 no.1 / pp.41-48 / 2010
  • Greedy protocols generally show good performance in Vehicular Ad-hoc Network (VANET) environments. However, they can produce longer routes, or even routing failures, when there are many traffic signals that temporarily empty the streets, or when a road divides in two with no later merge. When a node selects the next hop simply by distance to the destination, traditional greedy protocols sometimes build longer routes, and sometimes routing fails altogether. Most traditional greedy protocols take only the distance to the destination into account when selecting the next node. Because of the geographical environment, each node needs to consider not only the distance to the destination but also the direction to it while routing a packet. The proposed routing scheme considers both distance and direction when forwarding packets in order to build a stable route, and the protocol can be configured to match the surrounding environment. We evaluate its performance using two mobility models and network simulations. Most network performance metrics improve compared with traditional greedy protocols.
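A hedged sketch of the next-hop rule this abstract describes: score each neighbor by both its distance to the destination and the angle by which it deviates from the straight line toward the destination. The scoring blend and the weight alpha are assumptions for illustration, not the paper's formula.

```python
# Sketch of distance-plus-angle next-hop selection; alpha is a tunable
# assumed parameter, not taken from the paper.
import math

def angle_to_dest(cur, nbr, dst):
    """Angle (radians) between the cur->nbr and cur->dst directions."""
    a = math.atan2(nbr[1] - cur[1], nbr[0] - cur[0])
    b = math.atan2(dst[1] - cur[1], dst[0] - cur[0])
    d = abs(a - b)
    return min(d, 2 * math.pi - d)

def next_hop(cur, neighbors, dst, alpha=0.5):
    """Pick the neighbor minimizing distance penalized by angular deviation."""
    def score(nbr):
        dist = math.hypot(dst[0] - nbr[0], dst[1] - nbr[1])
        return dist * (1 + alpha * angle_to_dest(cur, nbr, dst))
    return min(neighbors, key=score)

# Toy usage: a neighbor that points away from the destination is
# penalized even if it is not the farthest from it.
print(next_hop((0, 0), [(5, 1), (3, 3)], (10, 10)))  # -> (3, 3)
```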

STARBURST AND AGN CONNECTIONS AND MODELS

  • Scoville, Nick
    • Journal of The Korean Astronomical Society / v.36 no.3 / pp.167-175 / 2003
  • There is accumulating evidence for a strong link between nuclear starbursts and AGN. Molecular gas in the central regions of galaxies plays a critical role in fueling nuclear starburst activity and feeding central AGN. The dense molecular ISM is accreted to the nuclear regions by stellar bars and galactic interactions. Here we describe recent observational results for the OB star forming regions in M51 and the nuclear starburst in Arp 220, both of which have approximately the same rate of star formation per unit mass of ISM. We suggest that the maximum efficiency for forming young stars is an Eddington-like limit imposed by the radiation pressure of newly formed stars acting on the interstellar dust. This limit corresponds to approximately 500 $L_\odot/M_\odot$ for optically thick regions in which the radiation has been degraded to the NIR. Interestingly, some of the same considerations can be important in AGN, where the fuel is provided by stellar evolution mass-loss or ISM accretion. Most of the stellar mass-loss occurs from evolving red giant stars, and whether this material can be accreted onto a central AGN depends on its radiative opacity. The latter depends on whether the dust survives or is sublimated due to radiative heating, which in turn is determined by the AGN luminosity and the distance of the mass-loss stars from the AGN. Several AGN phenomena, such as the broad emission and absorption lines, may arise in this stellar mass-loss material. The same radiation pressure limit to the accretion may arise if the AGN fuel comes from the ISM, since the ISM dust-to-gas ratio is the same as that of stellar mass-loss.
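As a hedged sketch of where the quoted limit comes from (not a derivation from the paper): balancing radiation pressure on dust against gravity in an optically thick region gives a maximum luminosity-to-mass ratio. The effective NIR dust opacity below is an assumed representative value, not one quoted in the abstract:

\[
  \frac{L_{\max}}{M} \simeq \frac{4\pi G c}{\kappa_{\rm NIR}}
  \approx 500\,\frac{L_\odot}{M_\odot}
  \quad\text{for}\quad \kappa_{\rm NIR} \sim 25\ \mathrm{cm^2\,g^{-1}},
\]

where $\kappa_{\rm NIR}$ is the opacity per gram of gas seen by the reprocessed near-infrared radiation.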

Improved Intelligent Routing Protocol in Vehicle Ad-hoc Networks

  • Lee, Dong Chun
    • Convergence Security Journal / v.21 no.1 / pp.129-135 / 2021
  • Greedy protocols generally show good performance in Vehicular Ad-hoc Network (VANET) environments. However, they can produce longer routes, or even routing failures, when there are many traffic signals that temporarily empty the streets, or when a road divides in two with no later merge. When a node selects the next hop simply by distance to the destination, traditional greedy protocols sometimes build longer routes, and sometimes routing fails altogether. Most traditional greedy protocols take only the distance to the destination into account when selecting the next node. Because of the geographical environment, each node needs to consider not only the distance to the destination but also the direction to it while routing a packet. The proposed routing scheme considers both distance and direction when forwarding packets in order to build a stable route, and the protocol can be configured to match the surrounding environment. We evaluate its performance using two mobility models and network simulations. Most network performance metrics improve compared with traditional greedy protocols.

D4AR - A 4-DIMENSIONAL AUGMENTED REALITY - MODEL FOR AUTOMATION AND VISUALIZATION OF CONSTRUCTION PROGRESS MONITORING

  • Golparvar-Fard, Mani; Pena-Mora, Feniosky
    • International conference on construction engineering and project management / 2009.05a / pp.30-31 / 2009
  • Early detection of schedule delay in field construction activities is vital to project management. It provides the opportunity to initiate remedial actions and increases the chance of controlling such overruns or minimizing their impacts. This requires project managers to design, implement, and maintain a systematic approach to progress monitoring that promptly identifies, processes, and communicates discrepancies between actual and as-planned performance as early as possible. Despite its importance, systematic implementation of progress monitoring is challenging: (1) current progress monitoring is time-consuming, as it needs extensive as-planned and as-built data collection; (2) the excessive amount of work required may cause human error and reduce the quality of manually collected data, and since usually only an approximate visual inspection is performed, the collected data are subjective; (3) existing methods of progress monitoring are non-systematic and may create a time lag between when progress is reported and when it is actually accomplished; (4) progress reports are visually complex and do not reflect the spatial aspects of construction; and (5) current reporting methods increase the time required to describe and explain progress in coordination meetings, which in turn can delay decision making. In summary, with current methods it may not be easy to understand the progress situation clearly and quickly. To overcome such inefficiencies, this research explores the application of unsorted daily progress photograph logs, available on any construction site, together with IFC-based 4D models for progress monitoring. Our approach computes, from the images themselves, the photographers' locations and orientations, along with a sparse 3D geometric representation of the as-built scene, using daily progress photographs and superimposition of the reconstructed scene over the as-planned 4D model. Within such an environment, progress photographs are registered in the virtual as-planned environment, allowing a large unstructured collection of daily construction images to be interactively explored. In addition, sparse reconstructed scenes superimposed over 4D models allow site images to be geo-registered with the as-planned components; consequently, a location-based image processing technique can be implemented and progress data extracted automatically. The result of the progress comparison between as-planned and as-built performance can subsequently be visualized in the D4AR (4D Augmented Reality) environment using a traffic light metaphor. In such an environment, project participants would be able to: 1) use the 4D as-planned model as a baseline for progress monitoring, compare it to daily construction photographs, and study workspace logistics; 2) interactively and remotely explore registered construction photographs in a 3D environment; 3) analyze registered images and quantify as-built progress; 4) measure discrepancies between as-planned and as-built performance; and 5) visually represent progress discrepancies by superimposing 4D as-planned models over progress photographs, make control decisions, and effectively communicate them to project participants.
We present preliminary results on two ongoing construction projects and discuss implementation, perceived benefits, and potential future enhancements of this new technology in construction, across automatic data collection, processing, and communication.
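A hedged sketch of the traffic light metaphor mentioned above: color each as-planned component by comparing measured as-built progress against the schedule. The thresholds and per-component data are illustrative assumptions, not the D4AR implementation.

```python
# Sketch of traffic-light progress classification; tolerance is assumed.
def traffic_light(planned_pct, built_pct, tolerance=5.0):
    """Classify a component's progress relative to plan (percent complete)."""
    delta = built_pct - planned_pct
    if delta >= -tolerance:
        return "green"       # on schedule or ahead
    if delta >= -3 * tolerance:
        return "yellow"      # moderately behind
    return "red"             # critically behind

# Hypothetical per-component progress extracted from registered photos.
components = {"column_A1": (100.0, 100.0), "slab_L2": (80.0, 55.0)}
for name, (planned, built) in components.items():
    print(name, traffic_light(planned, built))
```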


Force limited vibration testing: an evaluation of the computation of C2 for real load and probabilistic source

  • Wijker, J.J.; de Boer, A.; Ellenbroek, M.H.M.
    • Advances in aircraft and spacecraft science / v.2 no.2 / pp.217-232 / 2015
  • To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications. Besides the random vibration specification, the total mass, and the turn-over frequency of the load (test item), $C^2$ is a very important parameter for FLVT. A number of computational methods to estimate $C^2$ are described in the literature, i.e., the simple and the complex two-degrees-of-freedom systems, STDFS and CTDFS, respectively. The motivation of this work is to evaluate methods for computing a realistic value of $C^2$, so as to perform a representative force-limited random vibration test when the adjacent structure (source) is more or less unknown. Marchand discussed the formal derivation of $C^2$ using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), in which simplified asparagus-patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA method is suitable for computing the value of $C^2$. When no mathematical model of the source can be made available, estimates of $C^2$ can be found in the literature. In this paper, a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus-patch model of the source can be approximated. The chosen probabilistic design parameters have a uniform distribution. The value of $C^2$ can then be computed with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source. Data from two cases available in the literature have been analyzed and discussed to gain more insight into the applicability of the probabilistic method.
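For context on why $C^2$ matters, here is a hedged sketch of the semi-empirical force limit it feeds into, in the widely used NASA-HDBK-7004 form with a common $f^{-2}$ roll-off above the turn-over frequency. All numerical values are illustrative, not from the paper.

```python
# Sketch of the semi-empirical force limit: S_FF = C^2 * M0^2 * S_AA
# below the turn-over frequency f0, rolled off as (f0/f)^2 above it
# (a common roll-off choice). Values are illustrative assumptions.
import numpy as np

def force_limit_psd(freq, accel_psd, c2, m0, f0):
    """Force-limit PSD from the acceleration spec, C^2, load mass, and f0."""
    freq = np.asarray(freq, dtype=float)
    s_ff = c2 * m0**2 * np.asarray(accel_psd, dtype=float)
    above = freq > f0
    s_ff[above] *= (f0 / freq[above]) ** 2
    return s_ff

freq = np.array([20.0, 50.0, 100.0, 200.0, 400.0])  # Hz
s_aa = np.full_like(freq, 0.04)                     # (m/s^2)^2/Hz, flat spec (assumed)
# With S_AA in (m/s^2)^2/Hz and M0 in kg, the result is in N^2/Hz.
print(force_limit_psd(freq, s_aa, c2=2.0, m0=50.0, f0=100.0))
```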