• Title/Summary/Keyword: Time prediction


A Service Life Prediction for Unsound Concrete Under Carbonation Through Probability of Durable Failure (탄산화에 노출된 콘크리트 취약부의 확률론적 내구수명 평가)

  • Kwon, Seung Jun;Park, Sang Soon;Nam, Sang Hyeok;Lho, Byeong Cheol
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.12 no.2
    • /
    • pp.49-58
    • /
    • 2008
  • Generally, steel corrosion in concrete structures arises from carbonation in downtown areas and underground sites, and it leads to degradation of structural performance. In routine diagnosis and inspection, only the carbonation depth of sound concrete is evaluated, yet unsound zones such as joints and cracked areas can easily form in a concrete member during construction. In this study, a field survey of carbonation in RC columns in a downtown area was performed, and carbonation depths were measured in jointed and cracked concrete as well as in sound areas. The probability of durability failure over time was calculated from probabilistic variables, such as concrete cover depth and carbonation depth, obtained from the field survey. In addition, the service life of the structures was predicted based on the target probability of durability failure in the domestic concrete specification. The results show that service life varies widely within a single RC column depending on local conditions, and that it decreases rapidly with insufficient cover depth and growing crack width. They also show that securing adequate cover depth and concrete quality is very important, because the probability of durability failure is closely related to the coefficient of variation (C.O.V.) of the cover depth.
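
The probabilistic service-life idea described in the abstract can be sketched with a small Monte Carlo simulation. Everything below is illustrative: the function names are hypothetical, the parameter values (mean cover depth, carbonation rate coefficient, both C.O.V.s, and the 10% target failure probability) are not the paper's survey data, and the square-root-of-time carbonation model is the standard textbook simplification.

```python
import math
import random

def failure_probability(t, cover_mean=40.0, cover_cov=0.2,
                        carb_coeff=4.0, coeff_cov=0.15, n=20000, seed=1):
    """Monte Carlo estimate of P(carbonation depth >= cover depth) at age t (years).
    Cover depth (mm) and the carbonation rate coefficient (mm/yr^0.5) are treated
    as normal random variables; all parameter values here are illustrative only."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        cover = rng.gauss(cover_mean, cover_mean * cover_cov)
        coeff = rng.gauss(carb_coeff, carb_coeff * coeff_cov)
        if coeff * math.sqrt(t) >= cover:  # carbonation front reaches the rebar
            fails += 1
    return fails / n

def service_life(target_pf=0.10, horizon=200):
    """Smallest age (years) at which the failure probability exceeds the target."""
    for t in range(1, horizon + 1):
        if failure_probability(t) > target_pf:
            return t
    return horizon
```

With a larger C.O.V. of the cover depth, the failure probability at a given age rises, which mirrors the abstract's conclusion that cover-depth quality control drives durability.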

Groundwater Level Responses due to Moderate·Small Magnitude Earthquakes Using 1Hz groundwater Data (1Hz 지하수 데이터를 활용한 중·소규모 지진으로 인한 지하수위 반응)

  • Gahyeon Lee;Jae Min Lee;Dongkyu Park;Dong-Hun Kim;Jaehoon Jung;Soo-Hyoung Lee
    • Journal of Soil and Groundwater Environment
    • /
    • v.29 no.4
    • /
    • pp.32-43
    • /
    • 2024
  • Recently, numerous earthquakes have caused significant casualties and property damage worldwide, including major events in 2023 (Türkiye, M7.8; Morocco, M6.8) and 2024 (Noto Peninsula, Japan, M7.6; Taiwan, M7.4). In South Korea, the frequency of detectable and noticeable earthquakes has been gradually increasing since the M5.8 Gyeongju earthquake; notable events since 2020 include those in Jeju (M4.9), Goesan (M4.1), the East Sea (M4.5), and Gyeongju (M4.0). This study, for the first time in South Korea, monitored groundwater levels and temperatures at a 1 Hz frequency to observe groundwater responses to the moderate and small earthquakes that primarily occur within the country. Between April 23 and May 22, 2023, 17 earthquakes with magnitudes from M2.0 to M4.5 were reported in the East Sea region. Analysis of groundwater level responses at the Gangneung observation station revealed fluctuations associated with five of these events. The 1 Hz data clearly showed groundwater level changes even for small earthquakes, indicating that groundwater is highly sensitive to the frequent small earthquakes recently occurring in South Korea. The analysis confirmed that the maximum amplitude of the earthquake-induced groundwater level change is related to both the earthquake's magnitude and the distance from the epicenter. These findings highlight the importance of precise 1 Hz-level observations in earthquake-groundwater research. This study provides foundational data for earthquake monitoring and prediction, and emphasizes the need for ongoing research into earthquake-induced changes in groundwater parameters (such as aquifer characteristics, quantity, quality, and contaminant migration) for the various magnitudes of earthquakes that may occur in the country in the future.
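
As a rough illustration of how a co-seismic water-level fluctuation might be picked out of a 1 Hz record, the sketch below removes a moving-average baseline and reports the peak residual. The function name, the window length, and the whole detrending approach are assumptions for illustration, not the authors' published processing scheme.

```python
def peak_fluctuation(levels, window=601):
    """Estimate the peak water-level deviation (same units as input) in a 1 Hz
    record. A centred moving-average baseline over `window` samples is removed
    first, then the maximum absolute residual is taken. The window length is
    an illustrative choice, not the paper's method."""
    half = window // 2
    n = len(levels)
    peak = 0.0
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        baseline = sum(levels[lo:hi]) / (hi - lo)  # local slow trend
        peak = max(peak, abs(levels[i] - baseline))
    return peak
```

Comparing such peak values across events of different magnitudes and epicentral distances is the kind of amplitude analysis the abstract describes.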

In a Time of Change: Reflections on Humanities Research and Methodologies (변화의 시대, 인문학적 변화 연구와 방법에 대한 고찰)

  • Kim Dug-sam
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.49
    • /
    • pp.265-294
    • /
    • 2024
  • This study begins with a question about research methods in the humanities. It is grounded in the humanities, focusing on the changes that have brought both light and darkness to the field, and on the discourse regarding research methods that explore those changes. If the role of the humanities, unlike that of the sciences, is to guard against the proverbial "gray rhino," and if the humanities have a role to play in moderating the uncontrolled development of the sciences, what kind of research methods should the humanities pursue? Furthermore, what kind of research methods should the humanities adopt in line with the development of the sciences and the changing environment? This study discusses research methods in the humanities as follows. First, in Section 2, I advocate collaboration between humanistic and scientific methods, utilizing the accumulated assets produced by the humanities while continuously introducing scientific methods. In engineering and the natural sciences, prediction of change is highly precise and far-reaching; however, these fields find it difficult to approach change in a macro or integrated manner, and imprecise approaches are unwelcome in disciplines that deal with the real world. Addressing change at that level is primarily the responsibility of the humanities: where science focuses on precision, the humanities focus on questions of essence, because while the objects of change have varied throughout history, the nature of change has not varied much. Section 3 then discusses the changing environment, proposes changes to humanistic research methods, reviews and proposes inductive methods for studying change, and makes some suggestions for humanistic research on change. The data produced by the humanities and accumulated by humankind are abundant and widely applicable. In the future, we should not only actively accept the results of scientific advances but also actively seek systematic humanistic approaches and apply them across disciplinary boundaries to find solutions at the intersection of scientific methods and humanistic assets.

Improving the Accuracy of the Mohr Failure Envelope Approximating the Generalized Hoek-Brown Failure Criterion (일반화된 Hoek-Brown 파괴기준식의 근사 Mohr 파괴포락선 정확도 개선)

  • Youn-Kyou Lee
    • Tunnel and Underground Space
    • /
    • v.34 no.4
    • /
    • pp.355-373
    • /
    • 2024
  • The Generalized Hoek-Brown (GHB) criterion is a nonlinear failure criterion specialized for rock engineering applications, and its use has recently increased. However, the GHB criterion expresses the relationship between the minimum and maximum principal stresses at failure, and when GSI≠100 it has the disadvantage of being difficult to express as an explicit relationship between the normal and shear stresses acting on the failure plane, i.e., as a Mohr failure envelope. This makes it challenging to apply the GHB criterion in numerical analysis techniques such as limit equilibrium analysis, upper-bound limit analysis, and the critical plane approach. Consequently, recent studies have attempted to express the GHB Mohr failure envelope as an approximate analytical formula, and continued attention to this line of research is needed. This study presents improved formulations for the approximate GHB Mohr failure envelope that offer higher accuracy in predicting shear strength than existing formulas. The improvement employs a method to enhance the approximation accuracy of the tangential friction angle and uses the tangent-line equation of the nonlinear GHB failure envelope to improve the accuracy of the shear strength approximation. The latter part of the paper discusses the advantages and limitations of the proposed approximate GHB failure envelopes in terms of shear strength prediction accuracy and calculation time.
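
To make the principal-stress form of the criterion and its mapping onto the (σn, τ) plane concrete, here is a small numerical sketch using the GHB strength equation together with Balmer's relations and a finite-difference derivative. This is the exact numerical route to Mohr-plane points, not the paper's approximate analytical envelope, and the material parameters (σci, mb, s, a) are made-up illustrative values rather than ones tied to a specific GSI.

```python
import math

def ghb_sigma1(s3, sci=30.0, mb=2.0, s=0.004, a=0.51):
    """Generalized Hoek-Brown major principal stress at failure (MPa):
    sigma1 = sigma3 + sci * (mb*sigma3/sci + s)^a. Illustrative parameters."""
    return s3 + sci * (mb * s3 / sci + s) ** a

def mohr_point(s3, h=1e-5, **kw):
    """Map a (sigma3, sigma1) failure state to (sigma_n, tau) on the failure
    plane via Balmer's relations, using a central-difference estimate of
    d(sigma1)/d(sigma3)."""
    d = (ghb_sigma1(s3 + h, **kw) - ghb_sigma1(s3 - h, **kw)) / (2 * h)
    s1 = ghb_sigma1(s3, **kw)
    sn = s3 + (s1 - s3) / (d + 1.0)       # normal stress on the failure plane
    tau = (sn - s3) * math.sqrt(d)        # shear stress on the failure plane
    return sn, tau
```

Sampling `mohr_point` over a range of σ3 values traces the nonlinear Mohr envelope that the paper's analytical formulas approximate; it is this point-by-point procedure that makes a closed-form envelope attractive in limit-equilibrium codes.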

Extension Method of Association Rules Using Social Network Analysis (사회연결망 분석을 활용한 연관규칙 확장기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.111-126
    • /
    • 2017
  • Recommender systems based on association rule mining contribute significantly to sellers' sales by reducing the time consumers spend searching for the products they want. Recommendations based on the frequency of transactions, such as orders, can effectively identify the products that are statistically marketable among many candidates. A product with high sales potential, however, can be omitted from the recommendations if it records an insufficient number of transactions at the beginning of its sale. Products missing from the association-based recommendations lose the chance of exposure to consumers, which leads to a decline in transactions; diminished transactions, in turn, create a vicious circle of lost opportunities to be recommended. Thus, initial sales are likely to remain stagnant for a certain period, and products susceptible to fashion or seasonality, such as clothing, may be greatly affected. This study aimed to expand association rules so that the recommendation list includes products whose initial transaction frequency is low despite their potential for high sales. The particular purpose is to predict the strength of the future direct connection between two unconnected items from the properties of the paths located between them. An association between two items revealed in transactions can be interpreted as an interaction between them, which can be expressed as a link in a social network whose nodes are items. The first step calculates the centralities of the nodes lying on the paths that indirectly connect the two nodes lacking a direct connection. The next step identifies the number of such paths and the shortest among them. These extracted measures are used as independent variables in a regression analysis to predict the future connection strength between the nodes. The strength of the connection between the two nodes, which the model defines by the number of nodes between them, is measured after a certain period of time. The regression results confirm that the number of paths between two products, the length of the shortest path, and the number of neighboring items connected to the products are significantly related to their potential connection strength. This study used actual order transaction data collected over three months, from February to April 2016, from an online commerce company. To reduce the complexity of the analysis as the network grows, the analysis was performed only on miscellaneous goods. Two consecutively purchased items were chosen from each customer's transactions to obtain an antecedent-consequent pair, which provides the link needed to constitute a social network; the direction of each link was determined by the order in which the goods were purchased. Excluding the last ten days of the collection period, the social network of associated items was built for the extraction of independent variables, and the model predicts the links to be connected in the following ten days from those explanatory variables. Of the 5,711 previously unconnected links, 611 were newly connected during the last ten days. In experiments, the proposed model demonstrated excellent predictive performance: of the 571 links it predicted, 269 were confirmed to have been connected, 4.4 times more than the average of 61 obtainable without any prediction model. This study is expected to be useful in industries that launch new products quickly with short life cycles, since exposure time is critical for them. It could also be used to detect diseases that are rarely found in the early stages of medical treatment because of their low incidence. Since the complexity of social network analysis is sensitive to the number of nodes and links in the network, this study was conducted on a single category of miscellaneous goods; future research should consider that this condition may limit the opportunity to detect unexpected associations between products in different categories.
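
The path-based features the abstract describes can be sketched on a plain adjacency-dictionary item network. The feature set below (shortest-path length, common neighbours, degrees) loosely follows the paper's idea of using properties of intermediate paths as regression inputs, but the exact variables and function names here are illustrative assumptions, not the paper's specification.

```python
from collections import deque

def shortest_path_len(adj, src, dst):
    """BFS shortest-path length between two items in an adjacency dict;
    returns None if dst is unreachable from src."""
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, d = q.popleft()
        if node == dst:
            return d
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, d + 1))
    return None

def link_features(adj, a, b):
    """Feature vector for a currently unconnected item pair, to feed a
    regression that predicts future connection strength. Feature choices
    here are an illustration of the paper's path-based approach."""
    na, nb = set(adj.get(a, ())), set(adj.get(b, ()))
    return {
        "shortest_path": shortest_path_len(adj, a, b),
        "common_neighbors": len(na & nb),   # items bridging a and b
        "degree_a": len(na),
        "degree_b": len(nb),
    }
```

In the paper's setting, these features would be computed on the network built from all but the last ten days of transactions, with the newly formed links in the final ten days as the regression target.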

Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.1-10
    • /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags are divided into active and passive tags: active tags carry a power source that allows autonomous operation, while passive tags are small and low-cost, which makes them more suitable than active tags for the distribution industry. The reader processes the information received from the tags, and the system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied in a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage their adoption, several problems (price, size, power consumption, and security) must be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by the simultaneous responses of multiple tags. Anti-collision schemes for RFID systems fall into three categories: probabilistic, deterministic, and hybrid. ALOHA-based protocols are a representative probabilistic method, and Tree-based protocols a deterministic one. In ALOHA-based protocols, time is divided into multiple slots; each tag randomly selects a slot and transmits its ID in it. Being probabilistic, however, ALOHA-based protocols cannot guarantee that all tags are identified. In contrast, Tree-based protocols guarantee that the reader identifies every tag within its transmission range. In a Tree-based protocol, the reader sends a query and tags respond with their IDs; when two or more tags respond to the same query, a collision occurs and the reader generates and sends a new query. Frequent collisions degrade identification performance, so identifying tags quickly requires reducing collisions efficiently. Each RFID tag has a 96-bit EPC (Electronic Product Code) ID, and the tags of a given company or manufacturer have similar IDs sharing the same prefix. Unnecessary collisions therefore occur when multiple tags are identified with the Query Tree protocol, increasing the number of query-responses and the idle time, which significantly lengthens the identification time. To address this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, the Collision Tree and Query Tree protocols identify only one bit per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves identification performance using m-bit recognition, collision information from tag IDs, and a prediction technique. Comparing the proposed scheme with other Tree-based protocols under the same conditions, we show that it outperforms them in both identification time and identification efficiency.
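
The baseline Query Tree behaviour the abstract criticizes can be simulated in a few lines: the reader broadcasts a prefix, matching tags respond, and a collision splits the prefix by one bit. This is a simplified model (fixed-length IDs, no channel errors, no capture effect), intended only to show why similar tag IDs inflate the query count; it is not the proposed Adaptive M-ary protocol.

```python
def query_tree_rounds(tag_ids):
    """Simulate the basic Query Tree protocol and count reader queries.
    tag_ids: distinct fixed-length binary strings. Returns (query count,
    sorted list of identified tags). Simplified illustrative model."""
    queries, stack, identified = 0, [""], []
    while stack:
        prefix = stack.pop()
        queries += 1
        matches = [t for t in tag_ids if t.startswith(prefix)]
        if len(matches) == 1:            # exactly one responder: identified
            identified.append(matches[0])
        elif len(matches) > 1:           # collision: split prefix by one bit
            stack.append(prefix + "0")
            stack.append(prefix + "1")
        # zero matches: idle query, nothing to do
    return queries, sorted(identified)
```

Running this on tags that share a long common prefix shows the query count growing with the prefix length, which is exactly the inefficiency that m-bit recognition and collision information are meant to remove.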

A Study on Mixed-Mode Survey which Combine the Landline and Mobile Telephone Interviews: The Case of Special Election for the Mayor of Seoul (유.무선전화 병행조사에 대한 연구: 2011년 서울시장 보궐선거 여론조사 사례)

  • Lee, Kyoung-Taeg;Lee, Hwa-Jeong;Hyun, Kyung-Bo
    • Survey Research
    • /
    • v.13 no.1
    • /
    • pp.135-158
    • /
    • 2012
  • Korean telephone surveys have been based on the landline telephone directory or the RDD (Random Digit Dialing) method. These days, however, the number of households with no landline, or with a landline not registered in the directory, is increasing. Moreover, it is hard to reach young people and office workers, who usually stay out of the home in the daytime. Because of these issues, the predictive power of election polls has weakened; in particular, low accessibility to those who are away from home when a poll is conducted yields predictions biased toward conservative candidates. One solution is to contact respondents using both landline and mobile phones: landline phones for those at home and mobile phones for those away from home in the daytime (Mixed-Mode Survey, hereafter MMS). To conduct an MMS, 1) we need sampling frames for the landline and mobile surveys, and 2) we need to decide the proportion of the sample allocated to each. In this paper, we propose a heuristic method for conducting an MMS that uses RDD for the landline survey and an access-panel list for the mobile survey. The proportions of the landline and mobile sample sizes are determined based on the 'Lifestyle and Time Use Study' conducted by Statistics Korea. As a case study, four election polls were conducted in the period of the special election for the mayor of Seoul on Oct 26th, 2011. The initial three polls appropriately captured reactions and responses to the issues raised during the campaign, and the final poll came very close to the actual election result.
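
The allocation step can be sketched with two small helpers: one that splits the total sample between modes in proportion to the daytime at-home share, and one that combines the two mode-specific estimates. This is a deliberately simplified reading of the time-use-based design, with hypothetical function names; the paper's actual allocation rule may differ.

```python
def allocate_sample(total_n, at_home_share):
    """Split a total sample between landline and mobile interviews in
    proportion to the share of people typically at home in the daytime
    (a simplified illustration of the time-use-based allocation)."""
    landline = round(total_n * at_home_share)
    return landline, total_n - landline

def mms_estimate(p_landline, p_mobile, at_home_share):
    """Combine landline and mobile survey proportions with weights based
    on the daytime at-home share. Illustrative mixed-mode estimator."""
    return at_home_share * p_landline + (1 - at_home_share) * p_mobile
```

Weighting the mobile frame like this is what counteracts the conservative-leaning bias from only reaching the at-home population.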


Determinants of IPO Failure Risk and Price Response in Kosdaq (코스닥 상장 시 실패위험 결정요인과 주가반응에 관한 연구)

  • Oh, Sung-Bae;Nam, Sam-Hyun;Yi, Hwa-Deuk
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.5 no.4
    • /
    • pp.1-34
    • /
    • 2010
  • Recently, the failure rates of Kosdaq IPO firms have been increasing and their survival rates tend to be very low; when these firms fail, often after receiving a number of governmental financial supports, they can inflict severe financial damage on investors and on the economy as a whole. To ensure investors' confidence in Kosdaq and to foster promising, healthy businesses, it is necessary to precisely assess these firms' intrinsic value and survivability. This study investigates what contributes to the failure of IPO firms and analyzes how these elements are factored into the firms' stock returns. Failure risk is assessed at the time of the IPO, considering factors that reflect IPO characteristics: underwriter prestige, auditor quality, IPO offer price, firm age, and IPO proceeds. The study then examines how, if at all, the failure risk present at the IPO is reflected in post-IPO stock prices. The sample includes 98 failed Kosdaq firms and 569 healthy firms in the same business categories, and logit models are used to estimate the probability of failure. The empirical results indicate that auditor quality, IPO offer price, firm age, and IPO proceeds show significant relevance to failure risk at the time of the IPO. Among the other variables, firm size and ROA, previously deemed significantly related to failure risk, in fact show no significant relevance, whereas financial leverage does; this illustrates the efficacy of a model that appropriately reflects the attributes of IPO firms. Also, although previous studies considered R&D expenditures to be value relevant, this study finds that R&D is not significantly related to failure risk. In examining the relation between failure risk and stock prices, the study finds that failure risk is negatively related to one- and two-year size-adjusted abnormal returns after the IPO. These results may provide useful knowledge for government regulatory officials contemplating pertinent policy and for credit analysts in properly evaluating a firm's credit standing.
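
A logit model like the one the study estimates scores failure risk as P(fail) = 1 / (1 + exp(-(b0 + b·x))). The sketch below shows only the scoring step; the coefficient values and the function name are hypothetical placeholders, since the fitted coefficients are not given in the abstract.

```python
import math

def ipo_failure_probability(x, coefs, intercept):
    """Score IPO failure risk with a fitted logit model:
    P(fail) = 1 / (1 + exp(-(intercept + sum(coefs * x)))).
    x: covariates (e.g. auditor quality, offer price, firm age, proceeds);
    coefs/intercept: estimates from fitting on IPO data (placeholders here)."""
    z = intercept + sum(b * v for b, v in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))
```

With the sign pattern the abstract reports, covariates such as firm age and proceeds would enter with negative coefficients (older firms and larger offerings fail less often), but the actual magnitudes must come from the estimated model.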


Prediction of Forest Fire Danger Rating over the Korean Peninsula with the Digital Forecast Data and Daily Weather Index (DWI) Model (디지털예보자료와 Daily Weather Index (DWI) 모델을 적용한 한반도의 산불발생위험 예측)

  • Won, Myoung-Soo;Lee, Myung-Bo;Lee, Woo-Kyun;Yoon, Suk-Hee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.14 no.1
    • /
    • pp.1-10
    • /
    • 2012
  • The Digital Forecast of the Korea Meteorological Administration (KMA) provides 5 km gridded weather forecasts over the Korean Peninsula and the surrounding oceanic regions in Korean territory. It provides 12 forecast elements, such as three-hour-interval temperature, sky condition, wind direction, wind speed, relative humidity, wave height, probability of precipitation, 12-hour accumulated rain and snow, and daily minimum and maximum temperatures, updated every three hours for the next 48 hours. The objective of this study was to construct a Forest Fire Danger Rating System for the Korean Peninsula (FFDRS_KORP) based on the Daily Weather Index (DWI) and to improve its accuracy using the Digital Forecast data. We produced thematic maps of temperature, humidity, and wind speed over the Korean Peninsula to analyze the DWI. To calculate the DWI, a forest fire occurrence probability model obtained by logistic regression was applied: p = [1 + exp{-(2.494 + 0.004×Tmax - 0.008×EF)}]^(-1), where Tmax is the daily maximum temperature. A verification test against real-time observatory data showed that the Digital Forecast predictions were more accurate than the RDAPS data. Comparison of the average forest fire danger rating index (sampled at 233 administrative districts) computed with the Digital Forecast likewise showed higher relative accuracy than that computed with the RDAPS data; the coefficient of determination of the forest fire danger rating was R² = 0.854. The difference between the national mean fire danger rating index computed from real-time observatory data (70) and that computed from the Digital Forecast (70.5) was only 0.5.
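
The logistic model quoted in the abstract is simple enough to state directly in code. The coefficients (2.494, 0.004, -0.008) are taken from the abstract itself; reading EF as the effective humidity is our assumption, and the function name is illustrative.

```python
import math

def fire_occurrence_probability(t_max, ef):
    """Daily fire-occurrence probability from the abstract's logistic model:
    p = 1 / (1 + exp(-(2.494 + 0.004 * Tmax - 0.008 * EF))).
    t_max: daily maximum temperature; ef: EF term (we assume effective
    humidity). Coefficients are quoted from the abstract."""
    return 1.0 / (1.0 + math.exp(-(2.494 + 0.004 * t_max - 0.008 * ef)))
```

The signs behave as expected: probability rises with maximum temperature and falls as the humidity term grows, which is the gradient the DWI danger maps reflect.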

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.241-254
    • /
    • 2011
  • Financial time-series forecasting is one of the most important research issues because it is essential to the risk management of financial institutions. Researchers have therefore tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have become popular in this area because they do not require huge amounts of training data and have a low risk of overfitting. However, the user must determine several design factors heuristically to use an SVM, chief among them the selection of an appropriate kernel function with its parameters and proper feature subset selection. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection chooses a proper subset of instances from the original training data; it may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs that uses a genetic algorithm (GA) to optimize the instance selection process and the kernel parameters simultaneously; we call the model ISVM (SVM with Instance selection). In experiments on stock market data, the GA searches for optimal or near-optimal kernel parameter values and relevant instances for the SVM, so each chromosome requires two sets of codes: one for the kernel parameters and one for instance selection. For the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1; 50 generations are permitted as the stopping condition. The application data consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), with 2,218 trading days in total. We separate the data into training, test, and hold-out sets of 1,056, 581, and 581 observations, respectively. This study compares ISVM with several benchmark models: logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), a conventional SVM (SVM), and an SVM with kernel parameters optimized by the genetic algorithm (PSVM). The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data, while using only 556 of the 1,056 original training instances. In addition, a two-sample test for proportions is used to examine whether ISVM significantly outperforms the benchmarks: ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM, and PSVM at the 5% level.
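
The GA wrapper described above can be sketched as follows. The GA settings mirror the abstract (population 50, crossover 0.7, mutation 0.1, 50 generations), but the SVM training itself is abstracted away behind a caller-supplied `fitness` function, the kernel-parameter genes are omitted for brevity, and the operator choices (elitism, tournament among the top ten, one-point crossover) are our illustrative assumptions rather than the paper's exact design.

```python
import random

def ga_instance_selection(n_instances, fitness, pop_size=50, cx_rate=0.7,
                          mut_rate=0.1, generations=50, seed=0):
    """Sketch of a GA search over binary instance-selection masks.
    fitness(mask) should return validation accuracy of a model trained on
    the instances where mask[i] is True. GA settings follow the abstract;
    operator details here are illustrative."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_instances)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                       # elitism: keep the best two
        while len(nxt) < pop_size:
            a, b = rng.sample(scored[:10], 2)  # mate among the top ten
            if rng.random() < cx_rate:         # one-point crossover
                cut = rng.randrange(1, n_instances)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [g ^ (rng.random() < mut_rate) for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

In the paper's setup, the chromosome would additionally carry encoded kernel parameters, and `fitness` would train an SVM on the selected instances and score it on validation data.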