• Title/Summary/Keyword: 실제 모델 (real model)


The effect of tunnel ovality on the dynamic behavior of segment lining (Ovality가 세그먼트 라이닝의 동적 거동 특성에 미치는 영향)

  • Gyeong-Ju Yi;Ki-Il Song
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.25 no.6
    • /
    • pp.423-446
    • /
    • 2023
  • Shield TBM tunnel linings are divided into segments and rings. This study investigates the stress and displacement response of a segment lining under seismic waves through modeling that captures the interface behavior between segments by applying a shell interface element to the contact surfaces between segments and rings. Because Korea has no management criteria for the ovaling deformation of segment linings, this study also examines ovality criteria and their meaning for segment linings. The results show that the distribution patterns of stress and displacement under seismic waves are similar between continuous linings and segment linings, but the maximum values of stress and displacement differ. The continuous lining modeled as a shell has a stress distribution that is continuous over the 3D cylindrical shape, whereas in the segment lining stress concentrates at the outside of the segments, with the largest stress occurring where the contact surfaces between segments and rings meet. This intermittent, localized stress distribution intensifies as the ovality of the lining increases under seismic waves. The ovality at which the stress distribution begins to show irregularity and localization is about 150‰, an unrealistic value that cannot represent actual lining deformation. Therefore, although the ovality of a segment lining increases with depth, it does not have a significant impact on stability under seismic load.
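The ovality measure itself is not defined in the abstract; the sketch below assumes the diameter-change definition commonly used for tunnel linings, (D_max − D_min)/D_nominal expressed in per mille, with all numbers hypothetical.

```python
# Assumption: ovality is the diameter-change ratio often used for tunnel
# linings, (D_max - D_min) / D_nominal, reported in per mille (‰).
def ovality_permille(d_max_mm: float, d_min_mm: float, d_nominal_mm: float) -> float:
    """Ovality of a deformed ring in per mille of the nominal diameter."""
    return (d_max_mm - d_min_mm) / d_nominal_mm * 1000.0

# A hypothetical ring of 7,800 mm nominal diameter squeezed by 20 mm on one
# axis and widened by 20 mm on the other gives roughly 5 ‰, far below the
# 150 ‰ level at which the study saw localized stress concentrations.
print(ovality_permille(7820.0, 7780.0, 7800.0))  # ≈ 5.13
```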

Analysis of the Effect of Corner Points and Image Resolution in a Mechanical Test Combining Digital Image Processing and Mesh-free Method (디지털 이미지 처리와 강형식 기반의 무요소법을 융합한 시험법의 모서리 점과 이미지 해상도의 영향 분석)

  • Junwon Park;Yeon-Suk Jeong;Young-Cheol Yoon
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.1
    • /
    • pp.67-76
    • /
    • 2024
  • In this paper, we present a DIP-MLS testing method that combines digital image processing with a rigid body-based MLS differencing approach to measure mechanical variables and analyze the impact of target location and image resolution. This method assesses the displacement of the target attached to the sample through digital image processing and allocates this displacement to the node displacement of the MLS differencing method, which solely employs nodes to calculate mechanical variables such as stress and strain of the studied object. We propose an effective method to measure the displacement of the target's center of gravity using digital image processing. The calculation of mechanical variables through the MLS differencing method, incorporating image-based target displacement, facilitates easy computation of mechanical variables at arbitrary positions without constraints from meshes or grids. This is achieved by acquiring the accurate displacement history of the test specimen and utilizing the displacement of tracking points with low rigidity. The developed testing method was validated by comparing the measurement results of the sensor with those of the DIP-MLS testing method in a three-point bending test of a rubber beam. Additionally, numerical analysis results simulated only by the MLS differencing method were compared, confirming that the developed method accurately reproduces the actual test and shows good agreement with numerical analysis results before significant deformation. Furthermore, we analyzed the effects of boundary points by applying 46 tracking points, including corner points, to the DIP-MLS testing method. This was compared with using only the internal points of the target, determining the optimal image resolution for this testing method. Through this, we demonstrated that the developed method efficiently addresses the limitations of direct experiments or existing mesh-based simulations. It also suggests that digitalization of the experimental-simulation process is achievable to a considerable extent.

Development of Yóukè Mining System with Yóukè's Travel Demand and Insight Based on Web Search Traffic Information (웹검색 트래픽 정보를 활용한 유커 인바운드 여행 수요 예측 모형 및 유커마이닝 시스템 개발)

  • Choi, Youji;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.155-175
    • /
    • 2017
  • As social data come into the spotlight, mainstream web search engines provide data indicating how many people searched for a specific keyword: web search traffic data. Web search traffic information aggregates the crowd searching for a specific keyword, and in various areas it can serve as a useful variable representing the attention of ordinary users to a specific interest. Many studies use web search traffic data to nowcast or forecast social phenomena such as epidemics, consumer patterns, product life cycles, and financial investment models, and such data have also begun to be applied to predicting inbound tourism. Proper demand prediction is needed because tourism is a high value-added industry that increases employment and foreign exchange earnings. Among inbound tourists, Chinese tourists (Youke) are growing continuously; Youke have been the largest inbound group in Korean tourism for many years, and tourism profit per Youke is the largest as well. Research into proper demand-prediction approaches for Youke is therefore important in both the public and private sectors, since accurate tourism demand prediction supports efficient decision making under limited resources. This study suggests an improved model that reflects the latest issues of society by capturing the attention of groups of individuals. Traveling abroad is generally a high-involvement activity, so potential tourists tend to search deeply for information about their trip, and web search traffic presents tourists' attention during trip preparation in an instantaneous and dynamic way. We therefore selected keywords that potential Chinese tourists are likely to search for on the internet. Baidu, China's biggest web search engine with over 80% market share, provides users with access to web search traffic data. Qualitative interviews with potential tourists helped us understand pre-trip information search behavior and identify the keywords for this study. The selected keywords were categorized into three levels by how directly they relate to "Korean tourism"; this classification helps identify which keywords can explain Youke inbound demand, from closely to distantly related. Web search traffic data for each keyword were gathered by a web crawler developed to crawl the Baidu Index. Using the automatically gathered variables, a linear model was designed by multiple regression analysis, which is suitable for operational application in decision and policy making because the effects of its variables are easy to explain. After composing the regression models, we compared a model using only traditional variables with a model that adds the web search traffic variables, using significance tests and R-squared, and composed the final model from this comparison. The final regression model improves explanatory power over the traditional model and adds the advantages of real-time immediacy and convenience. Furthermore, this study demonstrates an intuitively visualized system for general use: the Youke Mining solution, which embeds the final regression model, offers several functions for tourism decision making, and is built on a data-science algorithm with a well-designed simple interface. In the end, this research suggests three significant implications, theoretical, practical, and political.
Theoretically, the Youke Mining system and the model in this research are a first step in Youke inbound prediction using an interactive, instant variable: web search traffic information representing tourists' attention while they prepare their trip. Practically, because Baidu holds more than 80% of the web search engine market, Baidu data can represent the attention of potential tourists preparing their own tours in real time. Finally, politically, the Chinese tourist demand-prediction model based on web search traffic can be used in tourism decision making for efficient resource management and for optimizing opportunities for successful policy.
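As a minimal sketch of the model-comparison step described above, the snippet below fits a traditional-variables regression and a search-traffic-augmented one with statsmodels and compares their R-squared values; the file name and variable names (exchange_rate, holiday_dummy, baidu_korea_tourism, youke_arrivals) are hypothetical stand-ins, since the abstract does not list the final predictors.

```python
import pandas as pd
import statsmodels.api as sm

# df: hypothetical monthly panel with traditional predictors plus a Baidu
# Index search-traffic variable for a Korea-tourism keyword.
df = pd.read_csv("youke_monthly.csv")

base = sm.OLS(df["youke_arrivals"],
              sm.add_constant(df[["exchange_rate", "holiday_dummy"]])).fit()
full = sm.OLS(df["youke_arrivals"],
              sm.add_constant(df[["exchange_rate", "holiday_dummy",
                                  "baidu_korea_tourism"]])).fit()

# Compare the traditional model with the search-traffic-augmented one,
# mirroring the study's significance and R-squared comparison.
print("R² traditional:", base.rsquared, " R² augmented:", full.rsquared)
print(full.summary())
```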

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas cannot easily distribute stored data across additional nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when data grow rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis from the MongoDB module, Hadoop-based analysis module, and MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions, while the aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a MongoDB log-insert performance evaluation over various chunk sizes.
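As a minimal sketch of the document-oriented storage the system relies on, the snippet below inserts heterogeneous bank-log records into a MongoDB collection with pymongo and runs a per-type aggregation of the kind the log graph generator module might issue. The connection string, database, collection, and field names are all hypothetical.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical endpoint
logs = client["banklogs"]["events"]

# MongoDB's schema-free documents let heterogeneous log records share one
# collection; fields can differ per log type without schema migrations.
logs.insert_many([
    {"type": "transaction", "ts": datetime.now(timezone.utc),
     "branch": "Seoul-01", "amount": 150000, "status": "ok"},
    {"type": "auth", "ts": datetime.now(timezone.utc),
     "user": "u4821", "result": "failure", "reason": "bad_pin"},
])

# Aggregate per-type counts, a simple example of the queries that feed the
# log graph generator module's charts.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```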

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues such as unemployment, economy crisis, social welfare etc. that are urgent issues to be solved in a modern society, in the existing approach, researchers usually collect opinions from professional experts and scholars through either online or offline surveys. However, such a method does not seem to be effective from time to time. As usual, due to the problem of expense, a large number of survey replies are seldom gathered. In some cases, it is also hard to find out professional persons dealing with specific social issues. Thus, the sample set is often small and may have some bias. Furthermore, regarding a social issue, several experts may make totally different conclusions because each expert has his subjective point of view and different background. In this case, it is considerably hard to figure out what current social issues are and which social issues are really important. To surmount the shortcomings of the current approach, in this paper, we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our proposed matching algorithm is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA shows a set of topic clusters, and then each topic cluster is labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and then a human annotator labels "Unemployment Problem" on Topic1. In this example, it is non-trivial to understand what happened to the unemployment problem in our society. In other words, taking a look at only social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop the matching algorithm that computes the probability value of a paragraph given a topic, relying on (i) topic terms and (ii) their probability values. For instance, given a set of text documents, we segment each text document to paragraphs. In the meantime, using LDA, we can extract a set of topics from the text documents. Based on our matching process, each paragraph is assigned to a topic, indicating that the paragraph best matches the topic. Finally, each topic has several best matched paragraphs. Furthermore, assuming there are a topic (e.g., Unemployment Problem) and the best matched paragraph (e.g., Up to 300 workers lost their jobs in XXX company at Seoul). In this case, we can grasp the detailed information of the social keyword such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. 
Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly. Through this prototype system, we have detected various social issues appearing in our society and also showed effectiveness of our proposed methods according to our experimental results. Note that you can also use our proof-of-concept system in http://dslab.snu.ac.kr/demo.html.
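The authors' matching algorithm is a custom generative-model method not reproduced here; the sketch below only illustrates the standard LDA step it builds on, assigning each paragraph to its highest-probability topic with gensim. The toy paragraphs and topic count are hypothetical.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Hypothetical tokenized paragraphs from news articles.
paragraphs = [
    ["unemployment", "layoff", "business"],
    ["welfare", "budget", "elderly"],
    ["layoff", "factory", "workers", "unemployment"],
]
dictionary = Dictionary(paragraphs)
corpus = [dictionary.doc2bow(p) for p in paragraphs]

lda = LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)

# Assign each paragraph to its best-matching topic by the highest topic
# probability, mirroring the paper's matching step in spirit.
for p, bow in zip(paragraphs, corpus):
    topic, prob = max(lda.get_document_topics(bow), key=lambda t: t[1])
    print(topic, round(float(prob), 3), p)
```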

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for housing computer systems and related components, and an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. IT facility failures in particular occur irregularly because of interdependence, and their causes are difficult to identify. Previous studies predicting failure in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. The causes of failures occurring inside servers, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, precisely because server failures do not occur in isolation: they trigger failures in other servers or are triggered by failures propagating from other servers. In other words, while existing studies analyzed failure under the assumption of a single server that does not affect other servers, this study assumes that failures have effects between servers. To define the complex-failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures per device are sorted in chronological order, and when a failure occurs in one piece of equipment, any failure in another piece of equipment within 5 minutes of that time is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, five devices that frequently failed simultaneously within those sequences were selected, and the cases where the selected devices failed together were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used to reflect the fact that each server contributes differently to a complex failure; this architecture increases prediction accuracy by weighting servers more heavily as their impact on the failure increases. The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data were analyzed and compared under a single-server assumption and a multiple-server assumption. The second experiment improved prediction accuracy for complex failures by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. These results support the hypothesis that servers affect one another, and the study confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that predicts failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
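The abstract does not publish the architecture itself; below is a minimal PyTorch sketch of the general idea, per-server LSTM encoders with an attention layer that weights servers by their estimated impact, not the authors' exact model. All dimensions and the five-server setup are hypothetical.

```python
import torch
import torch.nn as nn

class ServerHAN(nn.Module):
    """Hierarchical attention over per-server metric sequences (illustrative)."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.att = nn.Linear(hidden, 1)   # scores each server embedding
        self.clf = nn.Linear(hidden, 1)   # complex-failure probability

    def forward(self, x):
        # x: (batch, n_servers, timesteps, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)                # (batch, servers, hidden)
        w = torch.softmax(self.att(h), dim=1)      # attention weight per server
        ctx = (w * h).sum(dim=1)                   # impact-weighted context
        return torch.sigmoid(self.clf(ctx)), w.squeeze(-1)

model = ServerHAN(n_features=8)
x = torch.randn(4, 5, 60, 8)   # 4 samples, 5 servers, 60 timesteps, 8 metrics
prob, attn = model(x)          # attn shows which servers the model weighted
```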

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently AlphaGo, Google DeepMind's Go-playing artificial intelligence program, won a landmark victory against Lee Sedol. Many people thought machines could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains, and deep learning in particular drew attention as the core technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems; it shows especially good performance in image recognition and in high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, the traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted under restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used instead of overall accuracy to evaluate how well the models classify the class of interest. The details of applying each deep learning technique are as follows. The CNN algorithm reads adjacent values around a specific value to recognize features, but in business data the distance between fields carries little meaning because the fields are usually independent. We therefore set the CNN filter size to the number of fields, so that the network learns the characteristics of a whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the second layer reads the input in the reverse direction of the first, to reduce the influence of each field's position.
For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. The experiments yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because the CNN showed good performance in binary classification problems, to which it has rarely been applied, as well as in the fields where its effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for these binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
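A minimal sketch of the described CNN setup follows: a Conv1D filter spanning all fields at once, a hidden layer, dropout at 0.5, and F1 evaluation. The field count, layer widths, and random data are hypothetical placeholders, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.metrics import f1_score

n_fields = 16  # hypothetical number of input fields

# A Conv1D filter as wide as the record, so the convolution learns
# whole-record features rather than local ones (fields are independent).
model = models.Sequential([
    layers.Input(shape=(n_fields, 1)),
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.5),                 # neurons dropped with p = 0.5
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# X: (n_samples, n_fields) tabular data; y: binary target (both synthetic).
X = np.random.rand(1000, n_fields).astype("float32")
y = np.random.randint(0, 2, 1000)
model.fit(X[..., None], y, epochs=5, batch_size=64, verbose=0)
pred = (model.predict(X[..., None], verbose=0) > 0.5).astype(int).ravel()
print("F1:", f1_score(y, pred))          # F1 instead of overall accuracy
```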

The Causes of Conflict and the Effect of Control Mechanisms on Conflict Resolution between Manufacturer and Supplier (제조-공급자간 갈등 원인과 거래조정 방식의 갈등관리 효과)

  • Rhee, Jin Hwa
    • Journal of Distribution Research
    • /
    • v.17 no.4
    • /
    • pp.55-80
    • /
    • 2012
  • I. Introduction. Developing relationships between companies is a very important issue for securing competitive advantage in today's business environment (Bleeke & Ernst 1991; Mohr & Spekman 1994; Powell 1990). Partnerships between companies are based on shared goals, mutual understanding, and a professional level of interdependence; through such partnerships and cooperative efforts, companies achieve efficiency and effectiveness in their business (Mohr and Spekman, 1994). However, these ideal results are hard to expect from the B2B transaction alone. According to agency theory, a well-accepted theory in business strategy, organization, and marketing, two independent companies have fundamentally different corporate purposes, and there is a high chance of opportunism and conflict arising from human (organizational) traits such as self-interest, bounded rationality, and risk aversion, and from environmental factors such as information asymmetry (Eisenhardt 1989). That is, especially in partnerships between the principal (buyer) and agent (supplier) within a supply chain, the business contract itself does not provide competitive advantage; managing the partnership is the key to success. Therefore, managing the manufacturer-supplier partnership and identifying the causes of conflict are essential to improving B2B performance. Based on prior research and agency theory, this study clarifies how business hazards cause conflict in the supply chain and identifies how the resulting conflict is managed by two control mechanisms. II. Research model. III. Method. To validate our research model, this study gathered questionnaires from small and medium-sized enterprises (SMEs); in Korea, SMEs are firms with fewer than 300 employees and capital under 8 billion won (about 7.2 million dollars). We asked about the manufacturer's perception of the relationship with its biggest supplier, and our key informants were limited to persons responsible for buying (e.g., CEOs, executives, and purchasing managers). In detail, we contacted our initial sample (about 1,200 firms) by telephone, introduced our research motivation, and sent questionnaires by e-mail, mail, and direct survey. We received 361 responses, eliminated 32 inappropriate questionnaires, and used 329 manufacturers' data in the analysis. The purpose of this study is to identify the antecedent role of business hazards (environmental dynamism, asset specificity) and to investigate the moderating effect of control mechanisms (formal control, social control) on the conflict-performance relationship. To find the moderating effect of the control methods, we compared regression weights between low and high groups (by level of exercised control), choosing structural equation modeling because it is suited to multi-group analysis. The data analysis was performed with AMOS 17.0, and the model fits are statistically good (CMIN/DF=1.982, p<.000, CFI=.936, IFI=.937, RMSEA=.056). IV. Result. V. Discussion. Results show that the higher the environmental dynamism and asset specificity (toward a particular supplier) a buyer (manufacturer) faces, the more B2B conflict exists, and this conflict negatively affects relationship quality and financial outcomes. In addition, social control and formal control significantly weaken the negative effect of conflict on relationship quality.
However, unlike their conflict-resolution effect on relationship quality, financial outcomes were changed by neither social control nor formal control. We can explain these results by the characteristics of our sample, SMEs: the financial outcomes of these SMEs (manufacturers, or principals) are affected more easily by their customers (usually major companies) than by their suppliers (agents), and in recent years most companies have suffered financial problems because of the global economic recession, making it hard to isolate the supplier's (agent's) contribution. We therefore also support the suggestion of Gladstein (1984) and Poppo & Zenger (2002) that relational performance variables can capture the focal outcomes of an exchange relationship better than financial performance variables. This study has the implication that it empirically tests the sources of B2B conflict and the effects of conflict-resolution methods, and in particular it finds a significant moderating effect of formal control, which past B2B management studies in Korea have ignored.


Strategy for Store Management Using SOM Based on RFM (RFM 기반 SOM을 이용한 매장관리 전략 도출)

  • Jeong, Yoon Jeong;Choi, Il Young;Kim, Jae Kyeong;Choi, Ju Choel
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.93-112
    • /
    • 2015
  • Following changes in consumers' consumption patterns, existing retail shops have evolved into hypermarkets and convenience stores offering mostly groceries and daily products. It is therefore important to maintain proper inventory levels and product configuration in order to use the limited space in a retail store effectively and increase sales. Accordingly, this study proposes a product configuration and inventory-level strategy based on the RFM (Recency, Frequency, Monetary) model and SOM (self-organizing map) for managing a retail shop effectively. The RFM model is an analytic model of customer behavior based on past buying activity; it can differentiate important customers within large data using three variables. R represents recency, referring to the time of the last purchase: the most recently purchasing customer has the largest R. F represents frequency, the number of transactions in a particular period, and M represents monetary value, the amount of money spent in a particular period. The RFM method is thus known as a very effective model for customer segmentation. In this study, SOM cluster analysis was performed on normalized values of the RFM variables. SOM is regarded as one of the most distinguished artificial neural network models for unsupervised learning; it is a popular tool for clustering and visualizing high-dimensional data so that similar items are grouped spatially close to one another, and it has been successfully applied to pattern finding in various technical fields. In our research, the procedure finds sales patterns by analyzing product sales records with recency, frequency, and monetary values, and then builds a decision tree on the SOM results to suggest a business strategy. To validate the proposed procedure, we adopted M-mart data collected between 2014-01-01 and 2014-12-31. Each product receives R, F, and M values, and the products are clustered into nine clusters using SOM. We also ran three tests, on weekday data, weekend data, and the whole data, to analyze changes in sales patterns. To propose a strategy for each cluster, we examined the clustering criteria: the clusters obtained through the SOM are explained by the characteristics identified in the decision trees. As a result, we can suggest an inventory management strategy for each of the nine clusters through the proposed procedure. Products in the cluster highest in all three values (R, F, M) need a high inventory level and placement where they can lengthen the customer's path; in contrast, products in the cluster lowest in all three values need a low inventory level and placement where visibility is low. Products in the highest-R cluster are usually newly released products and should be placed at the front of the store. Managers should gradually decrease inventory levels for products in the highest-F cluster that were purchased in the past, because that cluster is assumed to have lower R and M values than average, from which it can be deduced that the products have sold poorly recently and total sales will be lower than the frequency implies. The procedure presented in this study is expected to contribute to raising the profitability of retail stores. The paper is organized as follows: the second chapter briefly reviews the related literature, the third chapter presents the proposed procedure, the fourth chapter applies the procedure to actual product sales data, and the fifth chapter concludes and discusses further research.
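A minimal sketch of the RFM-normalize-then-SOM step follows, using the minisom package and a 3×3 grid to yield nine clusters as in the study; the input file, column names, and the sign convention making larger R mean "more recent" are assumptions.

```python
import numpy as np
import pandas as pd
from minisom import MiniSom

# sales: hypothetical per-product transaction log with columns
# ['product_id', 'date', 'amount'].
sales = pd.read_csv("m_mart_sales.csv", parse_dates=["date"])
now = sales["date"].max()

rfm = sales.groupby("product_id").agg(
    R=("date", lambda d: (now - d.max()).days),  # days since last sale
    F=("date", "count"),                         # number of transactions
    M=("amount", "sum"),                         # total sales amount
)
rfm["R"] = -rfm["R"]                             # larger R = more recent
norm = (rfm - rfm.min()) / (rfm.max() - rfm.min())  # min-max normalize

# 3x3 SOM -> nine clusters, as in the study.
som = MiniSom(3, 3, 3, sigma=1.0, learning_rate=0.5, random_seed=42)
data = norm.to_numpy()
som.train_random(data, 1000)
rfm["cluster"] = [som.winner(v) for v in data]   # grid cell = cluster label
print(rfm.groupby("cluster").mean())             # cluster R/F/M profiles
```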

The Alignment Evaluation for the Patient Positioning System (PPS) of Gamma Knife Perfexion™ (감마나이프 퍼펙션의 자동환자이송장치에 대한 정렬됨 평가)

  • Jin, Seong Jin;Kim, Gyeong Rip;Hur, Beong Ik
    • Journal of the Korean Society of Radiology
    • /
    • v.14 no.3
    • /
    • pp.203-209
    • /
    • 2020
  • The purpose of this study is to assess the mechanical stability and alignment of the patient positioning system (PPS) of the Leksell Gamma Knife Perfexion (LGK PFX). The alignment of the PPS was evaluated through measurements of the deviation between the Radiological Focus Point (RFP) and the PPS Calibration Center Point (CCP) with different weights on the couch (0, 50, 60, 70, 80, and 90 kg). The measurements used a service diode test tool with three diode detectors, the tool used biannually during routine preventive maintenance. The test, conducted with varying weights on the PPS, measured the radial deviations for all three collimators (4, 8, and 16 mm) and for three different PPS positions. To evaluate the alignment of the PPS, the radial deviations between the radiation focus and the LGK calibration center point of multiple beams were averaged using the calibrated service diode test tool at three university hospitals in Busan and Gyeongnam. With no weight on the PPS, the mean radial deviations for the 4, 8, and 16 mm collimator settings at the center diode were 0.058 ± 0.023, 0.079 ± 0.023, and 0.097 ± 0.049 mm, respectively; with the 4 mm collimator applied to the center, short, and long diodes, the mean radial deviations were 0.058 ± 0.023, 0.078 ± 0.01, and 0.070 ± 0.023 mm, respectively. The mean radial deviations when irradiating the 8 and 16 mm collimators on the short and long diodes without weight were 0.07 ± 0.003 (8 mm, short diode) and 0.153 ± 0.002 mm (16 mm, short diode), and 0.031 ± 0.014 (8 mm, long diode) and 0.175 ± 0.01 mm (16 mm, long diode), respectively. With weights of 50 to 90 kg on the PPS, the mean radial deviations at the center diode for the 4, 8, and 16 mm collimators ranged over 0.061 ± 0.041 to 0.075 ± 0.015, 0.023 ± 0.004 to 0.034 ± 0.003, and 0.158 ± 0.08 to 0.17 ± 0.043 mm, respectively. Under the same conditions, at the short diode for 4, 8, and 16 mm, the mean radial deviations ranged over 0.063 ± 0.024 to 0.07 ± 0.017, 0.037 ± 0.006 to 0.059 ± 0.001, and 0.154 ± 0.03 to 0.165 ± 0.07 mm, respectively; at the long diode, they ranged over 0.102 ± 0.029 to 0.124 ± 0.036, 0.035 ± 0.004 to 0.054 ± 0.02, and 0.183 ± 0.092 to 0.202 ± 0.012 mm, respectively. All verification results met the manufacturer's allowable deviation criteria. Measuring alignment under various weights that mimic the actual treatment environment showed that weight dependence was negligible; in particular, no further adjustment or recalibration of the PPS was required during verification. The verification test of the PPS under various weights was confirmed to be suitable for routine quality assurance of the LGK PFX.
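As a small illustration of the reported statistics, the snippet below computes mean ± SD radial deviations from per-exposure offsets between the RFP and CCP, assuming the radial deviation is the Euclidean norm of the (dx, dy, dz) offset; the numbers are invented placeholders, not the study's data.

```python
import numpy as np

# Hypothetical diode offsets (mm) between the radiological focus point (RFP)
# and the PPS calibration center point (CCP), one row per exposure: (dx, dy, dz).
offsets = {
    ("4mm", "center"): np.array([[0.03, 0.04, 0.02], [0.05, 0.02, 0.01]]),
    ("8mm", "center"): np.array([[0.06, 0.04, 0.03], [0.07, 0.03, 0.02]]),
}

for (collimator, diode), xyz in offsets.items():
    radial = np.linalg.norm(xyz, axis=1)  # radial deviation per exposure
    print(f"{collimator} {diode}: "
          f"{radial.mean():.3f} ± {radial.std(ddof=1):.3f} mm")
```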