• Title/Summary/Keyword: logistic information system


A case study on algorithm development and software materialization for logistics optimization (기업 물류망 최적 설계 및 운영을 위한 알고리즘 설계 및 소프트웨어 구현 사례)

  • Han, Jae-Hyun; Kim, Jang-Yeop; Kim, Ji-Hyun; Jeong, Suk-Jae
    • Journal of the Korea Safety Management & Science, v.14 no.4, pp.153-168, 2012
  • Designing a firm's logistics network optimally, so as to minimize logistics cost and maximize customer service, has long been recognized as an important problem. It is not easy, however, to obtain an optimal solution, because logistics network decisions involve trade-offs among cost factors and dynamic, interdependent characteristics. Although some systems have been developed to support decision making in logistics analysis, no system yet provides enterprise-wide, on-site support for methodical logistics decisions. In particular, as e-business processes and information technology have advanced dramatically across various industries, the need has grown for practical education that closely resembles on-site work. The software developed in this study implements efficient algorithms proposed by recent studies on key logistics topics such as the location-allocation problem, the traveling salesman problem, the vehicle routing problem, and transportation and distribution problems. Built on Java, it also supports a wide variety of experimental designs and analyses in a user-friendly way. In the near future, we expect it to be extended into an integrated supply chain solution by adding production decision making to the current logistics decision making.
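
For flavor, here is a minimal sketch of one of the sub-problems the abstract names, the traveling salesman problem, solved with a nearest-neighbor construction heuristic. This is illustrative only, not the authors' algorithm; the city coordinates are made up.

```python
# Nearest-neighbor TSP heuristic: always visit the closest unvisited city.
import math

def nearest_neighbor_tour(coords, start=0):
    """Greedily build a tour over the given (x, y) coordinates."""
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        last = coords[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, coords[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (2, 1), (5, 0), (3, 4), (1, 3)]   # hypothetical depot/customer locations
tour = nearest_neighbor_tour(cities)
length = sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print(tour, round(length, 2))
```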

Impact of Off-Hour Hospital Presentation on Mortality in Different Subtypes of Acute Stroke in Korea : National Emergency Department Information System Data

  • Kim, Taikwan; Jwa, Cheolsu
    • Journal of Korean Neurosurgical Society, v.64 no.1, pp.51-59, 2021
  • Objective : Several studies have reported inconsistent findings across countries on whether off-hour hospital presentation is associated with worse outcomes in patients with acute stroke. The association remains unclear and has not been thoroughly studied in Korea. We assessed nationwide administrative data to verify the off-hour effect in different subtypes of acute stroke in Korea. Methods : We retrospectively analyzed nationwide administrative data from the National Emergency Department Information System in Korea: 7144 cases of ischemic stroke (IS), 2424 of intracerebral hemorrhage (ICH), and 1482 of subarachnoid hemorrhage (SAH). "Off-hour hospital presentation" was defined as weekends, holidays, and any time except 8:00 AM to 6:00 PM on weekdays. The primary outcome measure was in-hospital mortality in the different subtypes of acute stroke. We adjusted for covariates influencing the primary outcome using a binary logistic regression model and Cox's proportional hazard model. Results : In subjects with IS, off-hour hospital presentation was associated with unfavorable outcome (24.6% off hours vs. 20.9% working hours, p<0.001) and in-hospital mortality (5.3% off hours vs. 3.9% working hours, p=0.004), even after adjustment for confounding variables (hazard ratio [HR], 1.244; 95% confidence interval [CI], 1.106-1.400; HR, 1.402; 95% CI, 1.124-1.747, respectively). Patients presenting during off hours were significantly more often elderly (≥65 years; 35.4% vs. 32.1%, p=0.029) and were admitted to the intensive care unit significantly more often (32.5% vs. 29.9%, p=0.017) than those presenting during working hours. However, off-hour presentation was not related to poor short-term outcome in subjects with ICH or SAH. Conclusion : This study indicates that off-hour hospital presentation may lead to poor short-term morbidity and mortality in patients with IS, but not in patients with ICH or SAH, in Korea. The excess mortality appears attributable to older age or to the greater severity of medical conditions other than the stroke itself during off hours.
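
A hedged sketch of the kind of covariate-adjusted logistic regression the abstract describes, on synthetic data; the variable names and effect sizes are assumptions, not the NEDIS schema or the paper's estimates.

```python
# Adjusted odds ratio for off-hour presentation via binary logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 7144
off_hour = rng.integers(0, 2, n)   # presented during off hours (0/1)
age65 = rng.integers(0, 2, n)      # elderly, >= 65 years (0/1)
icu = rng.integers(0, 2, n)        # ICU admission (0/1)

# Simulate in-hospital mortality with a made-up off-hour effect
logit = -3.2 + 0.34 * off_hour + 0.8 * age65 + 0.6 * icu
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([off_hour, age65, icu]))
fit = sm.Logit(death, X).fit(disp=False)
print(np.exp(fit.params[1]))       # adjusted odds ratio for off-hour presentation
```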

Real-time Laying Hens Sound Analysis System using MFCC Feature Vectors

  • Jeon, Heung Seok; Na, Deayoung
    • Journal of the Korea Society of Computer and Information, v.26 no.3, pp.127-135, 2021
  • Animals raised in large numbers in confined environments such as laying-hen houses can be seriously harmed by even small environmental changes. Previously studied laying-hen sound analysis systems are difficult to apply to real hen houses because they considered only limited situations. In this paper, to solve this problem, we propose a new laying-hen sound analysis model using MFCC feature vectors. The model can detect seven situations that occur in actual hen houses by analyzing nine kinds of laying-hen sounds. In a performance evaluation, the proposed model achieved an average AUC of 0.93, about 43% higher than that of a frequency-feature analysis method.
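
A minimal sketch of MFCC feature extraction for sound-event classification, using librosa on a synthetic signal; the recording, sample rate, and downstream classifier are assumptions, not the paper's dataset or model.

```python
# Extract 13 MFCCs per frame, then average over time to get one vector per clip.
import numpy as np
import librosa

sr = 22050
# Stand-in for a 1-second hen-house recording (random noise for illustration)
y = np.random.default_rng(0).normal(size=sr).astype(np.float32)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, n_frames)
clip_vector = mfcc.mean(axis=1)                      # one feature vector per clip
print(clip_vector.shape)                             # (13,) -> input to a classifier
```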

Aerial Scene Labeling Based on Convolutional Neural Networks (Convolutional Neural Networks기반 항공영상 영역분할 및 분류)

  • Na, Jong-Pil; Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology, v.19 no.6, pp.484-491, 2015
  • The volume of aerial imagery has increased greatly with the growth of digital optical imaging technology and the development of UAVs, and such imagery is used for extracting ground properties, classification, change detection, image fusion, and mapping. In image analysis in particular, deep learning algorithms have introduced a new paradigm that overcomes longstanding limitations in pattern recognition. This paper demonstrates the possibility of applying deep learning (ConvNet) to the segmentation and classification of aerial scenes across a wide range of fields. We built a four-class image database of 3000 images in total, consisting of Road, Building, Yard, and Forest. Because each class has a distinct pattern, the resulting feature-vector maps differ across classes. Our system consists of feature extraction, classification, and training: feature extraction is built from two ConvNet-based layers, and classification is then performed using a multilayer perceptron and logistic regression.
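
A hedged sketch mirroring the pipeline described above: two convolutional layers as the feature extractor followed by a logistic-regression-style linear head. The four classes follow the abstract; the layer sizes, input resolution, and framework (PyTorch) are assumptions.

```python
import torch
import torch.nn as nn

class AerialNet(nn.Module):
    def __init__(self, n_classes=4):                  # Road, Building, Yard, Forest
        super().__init__()
        self.features = nn.Sequential(                # 2-layer ConvNet feature extractor
            nn.Conv2d(3, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(              # linear layer + softmax acts as
            nn.Flatten(), nn.LazyLinear(n_classes),   # multinomial logistic regression
        )

    def forward(self, x):
        return self.classifier(self.features(x))

net = AerialNet()
logits = net(torch.randn(8, 3, 64, 64))               # batch of 8 RGB patches
print(logits.shape)                                   # torch.Size([8, 4])
```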

Digital Chaotic Communication System Based on CDSK Modulation (CDSK 방식의 디지털 카오스 통신 시스템)

  • Bok, Junyeong; Ryu, Heung-Gyoon
    • The Journal of Korean Institute of Communications and Information Sciences, v.38A no.6, pp.479-485, 2013
  • Recently, as the importance of information security has come to be recognized, interest in wireless communication technology with improved security and a low probability of eavesdropping has been increasing rapidly. Chaotic signals can encode information efficiently because of their irregular behavior: a chaotic signal is very sensitive to its initial conditions, so it is difficult to detect without knowing them, and it is also robust to multipath interference. In this paper, we evaluate the performance of correlation delay shift keying (CDSK) modulation with different chaotic maps, namely the tent map, logistic map, Henon map, and Bernoulli shift map. We also analyze how BER performance depends on the choice of spreading factor (SF) in CDSK. Theoretical analysis and simulation confirm that the Henon map has better BER performance than the other three chaotic maps when the spreading factor is 70.
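
For reference, a sketch of the four chaotic maps compared in the paper, iterated in NumPy. The parameter values are common textbook choices, assumed rather than taken from the paper; a CDSK transmitter would use such a sequence as its chaotic reference signal.

```python
import numpy as np

def iterate(map_fn, state, n):
    """Iterate a 1-D or 2-D map n times, recording the x-coordinate."""
    out = np.empty(n)
    for i in range(n):
        state = map_fn(state)
        out[i] = state[0] if isinstance(state, tuple) else state
    return out

tent      = lambda x: 2 * x if x < 0.5 else 2 * (1 - x)         # tent map
logistic  = lambda x: 4 * x * (1 - x)                           # logistic map, r = 4
bernoulli = lambda x: (1.99 * x) % 1.0                          # Bernoulli shift map
henon     = lambda s: (1 - 1.4 * s[0] ** 2 + s[1], 0.3 * s[0])  # Henon map, (x, y) state

n = 1000  # e.g., spreading factor x number of symbols
seqs = {
    "tent": iterate(tent, 0.3, n),
    "logistic": iterate(logistic, 0.3, n),
    "bernoulli": iterate(bernoulli, 0.3, n),
    "henon": iterate(henon, (0.1, 0.1), n),
}
print({k: round(v.mean(), 3) for k, v in seqs.items()})
```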

The Design of Blog Network Analysis System using Map/Reduce Programming Model (Map/Reduce를 이용한 블로그 연결망 분석 시스템 설계)

  • Joe, In-Whee; Park, Jae-Kyun
    • The Journal of Korean Institute of Communications and Information Sciences, v.35 no.9B, pp.1259-1265, 2010
  • Online social networks have been growing with the development of the Internet; the most representative service is the blog, a type of personal web site usually maintained by an individual with regular entries of commentary. Blogs link to one another, forming what this paper calls a blog network, in which posts can diffuse from one blog to others. Analyzing information diffusion in the blog world is a useful research problem that can support predicting diffusion, anomaly detection, marketing, and revitalizing the blog world. Existing studies on network analysis do not consider the passage of time and can measure a node's activity only by the number of direct connections it has. As one solution, this paper proposes a new method of measuring blog network activity using a logistic curve model and the cosine similarity of keywords, implemented with the Map/Reduce programming model.
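
A sketch of the two measures named above: fitting a logistic growth curve to a post's cumulative diffusion count, and cosine similarity between two blogs' keyword vectors. The data here are made up, and this omits the Map/Reduce distribution step.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_curve(t, K, r, t0):
    """Logistic (S-shaped) growth: carrying capacity K, rate r, midpoint t0."""
    return K / (1 + np.exp(-r * (t - t0)))

t = np.arange(10)
spread = np.array([1, 2, 4, 9, 18, 30, 41, 47, 49, 50])  # cumulative reposts over time
(K, r, t0), _ = curve_fit(logistic_curve, t, spread, p0=[50, 1, 5])
print(f"carrying capacity K={K:.1f}, growth rate r={r:.2f}")

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

blog_a = np.array([3, 0, 1, 2])  # keyword frequency vectors of two blogs
blog_b = np.array([1, 1, 0, 2])
print(f"keyword similarity={cosine(blog_a, blog_b):.2f}")
```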

Stock Market Prediction Using Sentiment on YouTube Channels (유튜브 주식채널의 감성을 활용한 코스피 수익률 등락 예측)

  • Su-Ji, Cho; Cheol-Won Yang; Ki-Kwang Lee
    • Journal of Korean Society of Industrial and Systems Engineering, v.46 no.2, pp.102-108, 2023
  • Recently in Korea, the number of YouTube stock channels increased rapidly owing to high social interest in the stock market during the COVID-19 period. Accordingly, the role of new media channels such as YouTube in generating and disseminating market information is attracting attention. Nevertheless, prior studies on the market forecasting power of YouTube stock channels remain scarce. In this study, the market forecasting power of information from YouTube stock channels was examined and compared with that of traditional news media. To measure the information from each YouTube stock channel and from the news media, positive and negative opinions were extracted. The analysis found that, among YouTube stock channels, opinions in channels operated by media outlets were leading indicators of KOSPI market returns; a logistic regression model achieved 74% prediction accuracy. In contrast, Sampro TV, a popular YouTube stock channel, and the traditional news media merely reported the day's market situation or tended to lag the market. This study differs from previous work in that it verifies the market predictive power of information provided by YouTube stock channels, which have recently been growing in Korea. In future work, the results can be extended to individual stocks for more advanced analysis.
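
A hedged sketch of the prediction setup the abstract implies: classifying next-day market direction from a channel's daily net sentiment with logistic regression. The synthetic feature and label construction are assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
sentiment = rng.normal(size=n)                       # today's net sentiment score
up_next_day = (0.8 * sentiment + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    sentiment.reshape(-1, 1), up_next_day, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")      # cf. the paper's reported 74%
```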

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su; Jeong, Seung Hwan; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.123-139, 2019
  • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis, but it was short-lived, and bad debt began to increase again after the 2009 global financial crisis due to the real economic recession. NPL has become a major investment asset in recent years, as domestic capital market investors have begun to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it has been scarce because the history of capital market investment in the domestic NPL market is short. In addition, declining profitability and price fluctuations driven by the real estate business call for decision making based on more scientific and systematic analysis. In this study, we propose a prediction model that determines whether a benchmark yield will be achieved, using NPL market data in accordance with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, with 2291 items in total. As independent variables, from the 11 variables describing the characteristics of the real estate, we selected only those related to the dependent variable, using one-to-one t-tests, stepwise logistic regression, and decision trees. Seven independent variables were selected: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. A model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the model's usefulness; moreover, for a special purpose company, the main concern is whether or not to purchase a property, so knowing whether a certain level of return will be achieved is enough to make the decision. For the dependent variable, we constructed and compared prediction models while adjusting the threshold, to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. We found that the model built with the dependent variable defined by the 12% standard rate of return had the best average hit ratio, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we constructed and compared prediction models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model. To do this, 10 sets of training and testing data were extracted using the 10-fold validation method; after building each model on this data, the hit ratios of the sets were averaged and the performances compared. The average hit ratios of the models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study shows that the 7 independent variables and the artificial neural network prediction model can be used effectively in the future NPL market. The proposed model predicts in advance whether a new item will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that transactions at appropriate prices will make the NPL market more liquid.
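
A sketch of the 10-fold comparison described above, with four of the five models (the genetic algorithm linear model is omitted) on synthetic stand-in data; the real study used 2291 NPL records with 7 selected features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in mirroring the data dimensions reported in the abstract
X, y = make_classification(n_samples=2291, n_features=7, random_state=0)

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
}
for name, model in models.items():
    hit_ratio = cross_val_score(model, X, y, cv=10).mean()   # 10-fold hit ratio
    print(f"{name}: {hit_ratio:.4f}")
```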

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong; Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.39-54, 2013
  • The recent explosive growth of electronic commerce provides customers with many advantageous purchase opportunities. In this situation, customers who do not have enough knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists; thus, the product recommender system in an online shopping store has become one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect users' preferences cause disappointment and waste of time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' preferences precisely. The research data were collected from a real-world online shopping store that sells products from famous art galleries and museums in Korea. The data initially contained 5759 transactions; 3167 remained after deleting records with null values. We transformed the categorical variables into dummy variables and excluded outliers. The proposed model consists of two steps. The first step predicts which customers are highly likely to purchase products in the online shopping store. In this step, we first use logistic regression, decision trees, and artificial neural networks to predict, for each product group, the customers with a high likelihood of purchase, using SAS E-Miner software. We partition the data into modeling and validation sets for the logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network model; the validation set is the same across all experiments. We then combine the results of the individual predictors using multi-model ensemble techniques, namely bagging and bumping. Bagging ("Bootstrap Aggregation") combines the outputs of several machine learning techniques to raise the performance and stability of prediction or classification, and is a special form of the averaging method. Bumping ("Bootstrap Umbrella of Model Parameters") keeps only the model with the lowest error value. The results show that bumping outperforms bagging and the individual predictors for every product group except "Poster," where the artificial neural network performs best. In the second step, we use market basket analysis to extract association rules for co-purchased products. Setting the minimum transaction frequency to support associations at 5%, the maximum number of items in an association at 4, and the minimum confidence for rule generation at 10%, we extracted thirty-one association rules according to their lift, support, and confidence values. After excluding rules with lift below 1 and removing duplicates, fifteen association rules remained: eleven describe associations between products within the "Office Supplies" product group, one links the "Office Supplies" and "Fashion" product groups, and the other three link "Office Supplies" and "Home Decoration."
Finally, the proposed product recommender system provides recommendation lists to the appropriate customers. We tested the usability of the proposed system with a prototype and real-world transaction and profile data. To this end, we built the prototype system using ASP, JavaScript, and Microsoft Access. In addition, we surveyed user satisfaction with the product list recommended by the proposed system versus randomly selected product lists. The survey participants were 173 users of MSN Messenger, Daum Café, and P2P services. We evaluated user satisfaction on a five-point Likert scale and performed a paired-sample t-test on the survey results. The results show that the proposed model outperforms the random selection model at the 1% statistical significance level, meaning that users were significantly more satisfied with the recommended product list. The results also suggest that the proposed system can be useful in real-world online shopping stores.
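
A minimal sketch of bumping as the abstract defines it: fit a model on several bootstrap resamples and keep the one with the lowest error on the full training set. The data and base model here are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=500, random_state=0)

best_model, best_err = None, np.inf
for b in range(25):                               # 25 bootstrap resamples
    Xb, yb = resample(X, y, random_state=b)       # sample with replacement
    m = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xb, yb)
    err = 1 - m.score(X, y)                       # error on the ORIGINAL data
    if err < best_err:
        best_model, best_err = m, err

print(f"bumping kept the tree with training error {best_err:.3f}")
```

Unlike bagging, which averages all bootstrap models, bumping discards everything except the single best-performing model.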

Improvement of generalization of linear model through data augmentation based on Central Limit Theorem (데이터 증가를 통한 선형 모델의 일반화 성능 개량 (중심극한정리를 기반으로))

  • Hwang, Doohwan
    • Journal of Intelligence and Information Systems, v.28 no.2, pp.19-31, 2022
  • In machine learning, we usually divide the entire data set into training data and test data, train the model on the training data, and use the test data to determine the model's accuracy and generalization performance. A model with low generalization performance shows significantly reduced prediction accuracy on new data and is said to overfit. This study presents a method of generating training data based on the central limit theorem and combining it with the existing training data to increase normality, then training models on this data to improve generalization performance. To this end, data were generated from the sample mean and standard deviation of each feature, exploiting the central limit theorem, and a new training set was constructed by combining the generated data with the existing training data. To determine the degree of increase in normality, the Kolmogorov-Smirnov normality test was conducted, confirming that the new training data showed increased normality compared to the existing data. Generalization performance was measured as the difference in prediction accuracy between training data and test data. Applying the method to K-Nearest Neighbors (KNN), logistic regression, and Linear Discriminant Analysis (LDA), we confirmed that generalization performance improved for KNN, a non-parametric technique, and for LDA, which assumes normality in model building.
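
A hedged sketch of one plausible reading of the augmentation idea: means of bootstrap samples are approximately normal by the central limit theorem, so they can be appended to the training set as synthetic, more-normal points. The class construction, sample sizes, and use of the train-test accuracy gap are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.exponential(size=(300, 4))                # skewed, non-normal features
y = (X @ np.array([1.0, -1.0, 0.5, 0.2]) > 1.5).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

aug_X, aug_y = [], []
for cls in np.unique(y_tr):
    Xc = X_tr[y_tr == cls]
    for _ in range(len(Xc)):                      # one synthetic point per real one
        idx = rng.integers(0, len(Xc), size=30)   # bootstrap sample of size 30
        aug_X.append(Xc[idx].mean(axis=0))        # sample mean -> approx. normal (CLT)
        aug_y.append(cls)

X_new = np.vstack([X_tr, aug_X])
y_new = np.concatenate([y_tr, aug_y])
for name, (Xt, yt) in {"original": (X_tr, y_tr), "augmented": (X_new, y_new)}.items():
    lda = LinearDiscriminantAnalysis().fit(Xt, yt)
    gap = lda.score(Xt, yt) - lda.score(X_te, y_te)   # train-test accuracy gap
    print(f"{name}: generalization gap = {gap:.3f}")
```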