• Title/Summary/Keyword: uncertain data

Search Result 525, Processing Time 0.023 seconds

The Effect of Situational, Transformational, and Transactional Leadership on Firm Survival During the Crisis of Covid-19: Empirical Evidence from Restaurants Distribution in Thailand

  • Purit PONGPEARCHAN;Jirayu RATTANABORWORN
    • Journal of Distribution Science / v.21 no.8 / pp.11-21 / 2023
  • Purpose: This study examined the effect of situational, transformational, and transactional leadership on the survival of restaurant distribution firms in Thailand during the COVID-19 pandemic. Following the existing literature, these three leadership styles are treated as antecedents of firm performance leading to firm survival, and were therefore the critical factors in the firm implementation of restaurant distributors in Thailand. Research design, data, and methodology: The sample consisted of 400 restaurants in Thailand, and the statistical approach for data analysis was ordinary least-squares regression. The study also analyzed response bias, validity, and reliability. Results: The findings revealed that situational, transformational, and transactional leadership each had a primarily positive effect on firm performance. However, uncertain environmental conditions exerted a moderating effect, weakening the relationship between the three leadership styles and firm performance. Conclusions: Despite the COVID-19 situation in Thailand, the findings show no significant positive correlation between the performance of restaurant distribution firms and their survival, as the pandemic made it rare for firms, including restaurant distributors in Thailand, to endure and survive. In conclusion, practical and theoretical implications and recommendations for future research are presented.
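The moderation result described above (environmental uncertainty weakening the leadership-performance link) is what an interaction term captures in an ordinary least-squares regression. Below is a minimal pure-Python sketch on invented data, not the paper's model or sample:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X b = X'y),
    solved with Gaussian elimination and partial pivoting."""
    k, n = len(X[0]), len(X)
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Invented data: performance = 1 + 2*leadership + 0.5*uncertainty
#                              - 1.5*leadership*uncertainty
X = [[1, x, m, x * m] for x in range(10) for m in (0, 1)]
y = [1 + 2 * x + 0.5 * m - 1.5 * x * m for x in range(10) for m in (0, 1)]
beta = ols(X, y)
```

A negative interaction coefficient (`beta[3]`) is what a moderating effect of environmental uncertainty looks like in this setup: the leadership slope shrinks when uncertainty is high.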

A Bayesian Inference Model for Landmarks Detection on Mobile Devices (모바일 디바이스 상에서의 특이성 탐지를 위한 베이지안 추론 모델)

  • Hwang, Keum-Sung;Cho, Sung-Bae;Lea, Jong-Ho
    • Journal of KIISE:Computing Practices and Letters / v.13 no.1 / pp.35-45 / 2007
  • The log data collected from mobile devices contains diverse, meaningful, and practical personal information. However, this information is usually ignored because of the limited memory capacity, computing power, and analysis capability of mobile devices. We propose a novel method that detects landmarks, that is, pieces of information meaningful to users, by analyzing the log data in distributed modules to overcome the constraints of the mobile environment. The proposed method adopts a Bayesian probabilistic approach to enhance inference accuracy in uncertain environments, and a new cooperative modularization technique divides the Bayesian network into modules so that inference can be computed efficiently with limited resources. Experiments with artificial and real data indicate a precision of about 84% and a recall of about 76% on the artificial data, and a hit rate of about 89% (including partial matching) on the real data.
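The modular Bayesian idea can be sketched as follows. This is a toy illustration, not the authors' actual network: each log source gets its own likelihood module, and the modules' evidence is combined naive-Bayes style. All probabilities are invented.

```python
PRIOR = {"landmark": 0.1, "normal": 0.9}

# Each "module" holds likelihoods for one log source only, so it can be
# evaluated independently under tight memory constraints.
LOCATION_MODULE = {"landmark": {"home": 0.2, "new_place": 0.8},
                   "normal":   {"home": 0.9, "new_place": 0.1}}
CALL_MODULE     = {"landmark": {"many_calls": 0.7, "few_calls": 0.3},
                   "normal":   {"many_calls": 0.2, "few_calls": 0.8}}

def posterior(evidence):
    """Combine the per-module likelihoods with the prior (naive Bayes)."""
    modules = [(LOCATION_MODULE, evidence["location"]),
               (CALL_MODULE, evidence["calls"])]
    scores = {}
    for hypothesis, p in PRIOR.items():
        for module, observation in modules:
            p *= module[hypothesis][observation]
        scores[hypothesis] = p
    z = sum(scores.values())                  # normalize to a distribution
    return {h: s / z for h, s in scores.items()}

p = posterior({"location": "new_place", "calls": "many_calls"})
```

Because each module touches only its own log source, modules can run (and be dropped) independently, which is the point of the cooperative modularization described above.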

Normalizing interval data and their use in AHP (구간데이터 정규화와 계층적 분석과정에의 활용)

  • Kim, Eun Young;Ahn, Byeong Seok
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.1-11 / 2016
  • Entani and Tanaka (2007) presented a new approach for obtaining interval evaluations suitable for handling uncertain data. Above all, their approach is characterized by the normalization of interval data and thus the elimination of redundant bounds. Further, interval global weights in AHP are derived by using such normalized interval data. In this paper, we present a heuristic method for finding extreme points of interval data, which extends the method of Entani and Tanaka (2007) and also helps to obtain normalized interval data. In the second part of this paper, we show that the solutions to the linear program for interval global weights can be obtained by simple inspection. In addition, the absolute dominance proposed by the authors is extended to pairwise dominance, which makes it possible to identify additional dominated alternatives under the same information.
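One common formulation of interval-weight normalization, tightening each bound to the range attainable by some weight vector summing to one, can be sketched as follows. The numbers are illustrative and this is not necessarily the exact rule used in the paper:

```python
def normalize_intervals(intervals):
    """Tighten interval weights [l_i, u_i] so that every bound can be
    attained by some normalized weight vector (weights summing to 1).
    A bound outside these limits is redundant and gets trimmed."""
    result = []
    for i, (l, u) in enumerate(intervals):
        others_l = sum(lj for j, (lj, _) in enumerate(intervals) if j != i)
        others_u = sum(uj for j, (_, uj) in enumerate(intervals) if j != i)
        # l_i can be no smaller than 1 minus the others' upper bounds,
        # and u_i no larger than 1 minus the others' lower bounds.
        result.append((max(l, 1 - others_u), min(u, 1 - others_l)))
    return result

tightened = normalize_intervals([(0.1, 0.9), (0.2, 0.5), (0.1, 0.3)])
```

Here the first interval is trimmed to (0.2, 0.7): its original bounds 0.1 and 0.9 could never be realized together with the other two intervals, so they are redundant in the sense described above.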

A Predictive Model of the Generator Output Based on the Learning of Performance Data in Power Plant (발전플랜트 성능데이터 학습에 의한 발전기 출력 추정 모델)

  • Yang, HacJin;Kim, Seong Kun
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.12 / pp.8753-8759 / 2015
  • Establishing analysis procedures and validated performance measurements for generator output is required to maintain stable management of generator output in the turbine power generation cycle. We developed a turbine expansion model and a measurement validation model for calculating generator performance from turbine output based on the ASME (American Society of Mechanical Engineers) PTC (Performance Test Code). We also developed a verification model for uncertain measurement data related to the turbine and generator output. Whereas models in previous research were developed using artificial neural networks and kernel regression, the verification model in this paper is based on a Support Vector Machine (SVM) to overcome the problem of unmeasured data. Selection procedures for the related variables and the data window used in verification learning were also developed. The model proved suitable for the estimation process, with a learning error in the range of about 1%. The learning model can provide validated estimates for corrective performance analysis of turbine-cycle output by predicting values for lost measurement data.
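The paper's verification model itself is an SVM, which is not reproduced here; the sketch below only illustrates the surrounding validation step: a measured generator output is accepted, flagged, or replaced by the model estimate depending on whether it falls inside a tolerance band. The ~1% learning error reported above is used as an illustrative threshold, and the function and field names are invented:

```python
def validate_measurement(measured_mw, estimated_mw, tolerance=0.01):
    """Check a generator-output reading against a model estimate.
    Returns (value_to_use, status)."""
    if measured_mw is None:                 # data loss: substitute the estimate
        return estimated_mw, "substituted"
    rel_err = abs(measured_mw - estimated_mw) / abs(estimated_mw)
    if rel_err <= tolerance:                # within the learning-error band
        return measured_mw, "valid"
    return estimated_mw, "suspect"          # deviates too much: use estimate
```

The "substituted" branch corresponds to the data-loss case the abstract mentions, where the learned model stands in for a missing measurement.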

SBR-k(Sized-based replacement-k) : File Replacement in Data Grid Environments (SBR-k(Sized-based replacement-k) : 데이터 그리드 환경에서 파일 교체)

  • Park, Hong-Jin
    • The Journal of the Korea Contents Association / v.8 no.11 / pp.57-64 / 2008
  • Data grid computing provides geographically distributed storage resources to solve computational problems with large-scale data. Unlike cache replacement in virtual memory or web caching, finding an optimal file replacement policy for data grids is an important problem because the files involved are very large. Traditional file replacement policies such as LRU (Least Recently Used), LCB-K (Least Cost Beneficial based on K), EBR (Economic-based cache replacement), and LVCT (Least Value-based on Caching Time) must either predict future requests or consume additional resources for file replacement. To solve these problems, this paper proposes SBR-k (Sized-based replacement-k), which replaces files based on file size. The proposed policy considers file size so as to reduce the number of files replaced to accommodate a requested file, rather than forecasting an uncertain future. Simulation results show that the hit ratio was similar to that of traditional policies when the cache size was small, but the proposed policy was superior when the cache size was large.
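One plausible reading of the size-based idea (not necessarily the paper's exact SBR-k algorithm) is that eviction should free enough space for the requested file while touching as few cached files as possible:

```python
def evict_for(cache, need, capacity):
    """Free space for a file of size `need` while evicting as few cached
    files as possible. `cache` maps file name -> size, in the same units
    as `capacity`. Returns the list of evicted file names."""
    free = capacity - sum(cache.values())
    evicted = []
    # Evicting the largest files first minimizes the number of evictions.
    for name, size in sorted(cache.items(), key=lambda kv: -kv[1]):
        if free >= need:
            break
        del cache[name]
        evicted.append(name)
        free += size
    return evicted
```

With `cache = {"a": 4, "b": 3, "c": 2}` and `capacity = 10`, a request needing 5 units evicts only the single largest file, `"a"`, rather than several smaller ones.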

Studies on the Freezing Time Prediction of Foodstuffs by Plank's Equation of Modification (Plank's Equation의 변형에 의한 식품의 동결시간 예측)

  • Cheong, Jin-Woo;Kong, Jai-Yul
    • Korean Journal of Food Science and Technology / v.20 no.2 / pp.280-286 / 1988
  • Freezing has become increasingly important in the food industry as a means of food preservation since the turn of the century. For quality, processing, and economic reasons, it is important to predict the freezing time of foods. A number of models have been proposed to predict freezing time, but most analytical freezing-time prediction techniques apply only to specific freezing conditions. Therefore, it is necessary to develop an improved analytical method for freezing-time prediction under various conditions. The objectives of this study were to review previous experimental data obtained under uncertain freezing conditions and thermophysical data, to develop a simple and accurate analytical method for predicting freezing time, and to obtain the freezing times of various foodstuffs under still-air freezing and immersion freezing. The results showed that the proposed method gave better results than the more complex methods it was compared with.
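The classical Plank equation that the study modifies estimates freezing time as t = (ρλ/(T_f − T_a))·(Pa/h + Ra²/k), with shape factors P and R depending on geometry. A sketch with illustrative property values, not the paper's experimental data:

```python
# Well-known Plank shape factors (P, R) for the basic geometries.
SHAPE_FACTORS = {"slab": (1/2, 1/8), "cylinder": (1/4, 1/16), "sphere": (1/6, 1/24)}

def plank_freezing_time(rho, latent_heat, t_freeze, t_medium, a, h, k, shape="slab"):
    """Classical (unmodified) Plank freezing-time estimate in seconds.
    rho [kg/m^3], latent_heat [J/kg], temperatures [deg C],
    a = full thickness or diameter [m], h = surface heat transfer
    coefficient [W/m^2 K], k = frozen-layer conductivity [W/m K]."""
    P, R = SHAPE_FACTORS[shape]
    return rho * latent_heat / (t_freeze - t_medium) * (P * a / h + R * a**2 / k)

# Illustrative slab of lean meat, 5 cm thick, frozen in air at -30 deg C.
t = plank_freezing_time(rho=1050, latent_heat=2.5e5, t_freeze=-1.0,
                        t_medium=-30.0, a=0.05, h=30.0, k=1.1, shape="slab")
```

This gives a freezing time on the order of a few hours, which is the baseline the modified equations in the study aim to improve under varied conditions.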


Classification Knowledge Discovery in Data Mining Using Rough Sets and a Hierarchical Classification Structure (러프집합과 계층적 분류구조를 이용한 데이터마이닝에서 분류지식발견)

  • Lee, Chul-Heui;Seo, Seon-Hak
    • Journal of the Korean Institute of Intelligent Systems / v.12 no.3 / pp.202-209 / 2002
  • This paper deals with the simplification of classification rules for data mining and of rule bases for control systems. Data mining, which extracts useful information from large amounts of data, is an important issue. There are various classification methodologies for data mining, such as decision trees and neural networks, but the result should be explicit and understandable, and the classification rules should be short and clear. Rough set theory is an effective technique for extracting knowledge from incomplete and inconsistent data, and it provides a good solution for classification and approximation by using various attributes effectively. This paper investigates the granularity of knowledge for reasoning about uncertain concepts by using rough set approximations, and uses a hierarchical classification structure, which is more effective for classification, by applying the core to the upper level. The proposed classification methodology makes the analysis of an information system easy and generates minimal classification rules.
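The core rough-set operations the abstract relies on, lower and upper approximations of an uncertain concept, can be shown on a toy decision table (illustrative, not the paper's data):

```python
# object -> (condition attribute value, decision)
table = {1: ("high", "yes"), 2: ("high", "yes"),
         3: ("low", "no"),  4: ("low", "yes")}

def eq_class(x):
    """Objects indiscernible from x on the condition attribute."""
    return {y for y in table if table[y][0] == table[x][0]}

target = {x for x in table if table[x][1] == "yes"}    # the concept "yes"
lower = {x for x in table if eq_class(x) <= target}    # certainly "yes"
upper = {x for x in table if eq_class(x) & target}     # possibly "yes"
boundary = upper - lower                               # the uncertain region
```

Objects 3 and 4 share the same condition value but disagree on the decision, so they fall in the boundary region: this inconsistency is exactly what rough-set approximations let a classifier handle without discarding the data.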

Unusual Enhancements of NmF2 in Anyang Ionosonde Data

  • Yun, Jongyeon;Kim, Yong Ha;Kim, Eojin;Kwak, Young-Sil;Hong, Sunhak
    • Journal of Astronomy and Space Sciences / v.30 no.4 / pp.223-230 / 2013
  • Sudden enhancements of daytime NmF2 appeared in Anyang ionosonde data during the summer seasons of 2006-2007. In order to investigate the causes of these unusual enhancements, we compared Anyang NmF2 values with the total electron content (GPS TEC) observed at Daejeon, and also with ionosonde data at mid-latitude stations. First, we found no similar increase in the Daejeon GPS TEC when the sudden enhancements of Anyang NmF2 occurred. Second, we investigated NmF2 values observed at other ionosonde stations that use the same ionosonde model and auto-scaling program as the Anyang ionosonde, and found similar enhancements there. Moreover, analysis of ionograms from Athens and Rome showed that sporadic-E layers with high electron density were present during the NmF2 enhancements. The auto-scaling program used (ARTIST 4.5) appears to recognize sporadic-E layer echoes as an F2-layer trace, resulting in erroneous critical frequencies of the F2 layer (foF2); other versions of the ARTIST scaling program also seem to produce similar erroneous results. We therefore conclude that the sudden enhancements of NmF2 in the Anyang data were due to the misrecognition of sporadic-E echoes as an F layer by the auto-scaling program. We also noticed that although the scaling program flagged the confidence level (C-level) of an ionogram as uncertain when a sporadic-E layer occurred, it still automatically computed erroneous foF2 values. Therefore, one should check the confidence level before using long-term ionosonde data produced by an auto-scaling program.
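The practical recommendation above, checking the auto-scaler's confidence flag before trusting foF2, amounts to a simple filter. The field names and the numeric convention (higher C-level = more confident) are invented for this sketch; check the metadata conventions of the actual ionogram archive:

```python
def usable_fof2(records, min_c_level=2):
    """Keep only foF2 values whose auto-scaling confidence level meets
    the threshold; low-confidence scalings (e.g. sporadic-E
    contamination) are dropped rather than silently used."""
    return [r["foF2"] for r in records if r["c_level"] >= min_c_level]

records = [{"foF2": 6.2, "c_level": 3},   # confident scaling: keep
           {"foF2": 11.8, "c_level": 0}]  # flagged uncertain: drop
```

Applied to long-term archives, such a filter would have removed the spurious NmF2 enhancements discussed above before they entered any climatological analysis.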

Analysis Standardization Layout for Efficient Prediction Model (예측모델 구축을 위한 분석 단계별 레이아웃 표준화 연구)

  • Kim, Hyo-Kwan;Hwang, Won-Yong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.5 / pp.543-549 / 2018
  • Prediction is becoming increasingly important because of the uncertain business environment. When a predictive model is implemented, a number of data engineers and scientists are involved in the project and various prediction ideas are suggested to enhance the model, so it takes a long time to validate the model's accuracy, and the code is hard to redesign and redevelop. In this study, a Lego-like development method is suggested for finding the most efficient way to integrate various prediction methodologies into one model. This methodology is made possible by setting the same data layout for the development code of each idea. Each idea can therefore be validated separately, and ideas are easy to add and delete because they are developed in Lego form, which shortens the overall development time. Finally, a test result is presented to confirm that the proposed method makes adding and deleting ideas easy.
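The shared-layout idea can be sketched as a pipeline whose modules all consume and produce rows with one agreed column set, so any idea-module can be added or removed independently. The names and columns here are invented:

```python
LAYOUT = {"id", "feature", "score"}   # the one layout every module must keep

def run_pipeline(rows, modules):
    """Apply each idea-module in turn, checking that it preserves the
    shared data layout so modules stay independently pluggable."""
    for module in modules:
        rows = module(rows)
        for row in rows:
            if set(row) != LAYOUT:
                raise ValueError(f"{module.__name__} broke the shared layout")
    return rows

# Two illustrative idea-modules; each maps rows -> rows with the same layout.
def scale_feature(rows):
    return [{**r, "feature": r["feature"] * 2} for r in rows]

def add_score(rows):
    return [{**r, "score": r["feature"] + 1} for r in rows]

rows = [{"id": 1, "feature": 3, "score": 0}]
out = run_pipeline(rows, [scale_feature, add_score])
```

Dropping `scale_feature` or appending a new module requires no change to the others, which is the Lego property the abstract describes.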

Analyzing on the cause of downstream submergence damages in rural areas with dam discharge using dam management data

  • Sung-Wook Yun;Chan Yu
    • Korean Journal of Agricultural Science / v.50 no.3 / pp.373-389 / 2023
  • The downstream submergence damage that occurred during the 2020 flood season around the Yongdam dam and five other sites was analyzed using related dam management data. Hourly and daily data were collected from public national websites, and various analyses, such as autocorrelation, partial autocorrelation, stationarity tests, trend tests, Granger causality, rescaled-range analysis, and principal statistical analysis, were conducted to find the cause of the catastrophic damage in 2020. The damage surrounding the Yongdam dam in 2020 was confirmed to have been caused by mismanagement of the flood-season water level. A similar pattern was found downstream of the Namgang and Hapcheon dams; however, it is uncertain whether the damage in the same year was caused by discharges from these dams. Conversely, a pattern different from that of the Yongdam dam was seen downstream of the Sumjingang and Daecheong dams, where the management of the flood-season water level appeared appropriate; hence, the damage there is assumed to have occurred through the increase in the absolute discharge amount from the dams and the insufficient flood-control capacity of the downstream river. Because of the non-stationarity of the management data, we adopted wavelet transform analysis to observe the behavior of the dam management data in detail. Based on the results, an increasing trend in the discharge amount from the dams was observed after the year 2000, which may serve as a warning about similar events in the future. Therefore, additional and continuous research on downstream safety against dam discharges is necessary.
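Of the diagnostics listed above, the simplest, sample autocorrelation, can be computed in a few lines; the series in the example is illustrative, not dam data:

```python
def autocorrelation(x, lag):
    """Sample autocorrelation of a series at a given lag:
    r(k) = sum_t (x_t - m)(x_{t+k} - m) / sum_t (x_t - m)^2."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / var
```

At lag 0 the statistic is 1 by construction; strongly alternating series give values near -1 at lag 1, while persistent series (such as slowly varying reservoir levels) stay close to 1 over many lags, which is what motivates the stationarity and trend tests mentioned above.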