• Title/Summary/Keyword: Industrial power systems


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility that houses computer systems and related components, and an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and to prevent failures. If a failure occurs in one element of the facility, it may affect not only that equipment but also other connected equipment, causing enormous damage. IT equipment in particular fails irregularly because of interdependencies, which makes root causes difficult to identify. Previous studies on failure prediction in data centers treated each server as a single, isolated state, without considering that devices interact. In this study, therefore, data center failures are classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focuses on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring inside servers, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, because server failures rarely occur in isolation: a failure on one server can trigger failures on other servers, or be triggered by them. In other words, whereas existing studies analyzed failures under the assumption that servers do not affect one another, this study assumes that failures propagate between servers.
In order to define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures for each device are sorted in chronological order, and when a failure occurs on one device and another device fails within 5 minutes of that time, the two failures are defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently appeared together within these sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network model structure was used to reflect the fact that each server contributes differently to a complex failure. This architecture improves prediction accuracy by assigning larger weights to the servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was modeled both as a single-server state and as a multi-server state, and the results were compared. The second experiment improved prediction accuracy for the multi-server case by optimizing a separate threshold for each server.
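The 5-minute co-occurrence rule described above can be sketched as follows. This is an illustrative reading of the rule, not the authors' code; the event format and function names are assumptions.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the paper's rule: failures on different devices
# are "simultaneous" when they occur within 5 minutes of the first
# failure's occurrence time. The event format is an assumption.
WINDOW = timedelta(minutes=5)

def group_simultaneous(events):
    """events: list of (timestamp, device) tuples from the failure history.
    Returns groups of devices that failed within 5 minutes of the
    group's first failure (only groups with 2+ devices are kept)."""
    events = sorted(events, key=lambda e: e[0])
    groups = []
    for ts, device in events:
        if groups and ts - groups[-1]["start"] <= WINDOW:
            groups[-1]["devices"].append(device)
        else:
            groups.append({"start": ts, "devices": [device]})
    return [g["devices"] for g in groups if len(g["devices"]) > 1]

events = [
    (datetime(2020, 1, 1, 10, 0), "server-A"),  # e.g. Server Down
    (datetime(2020, 1, 1, 10, 3), "server-B"),  # within 5 min -> simultaneous
    (datetime(2020, 1, 1, 11, 0), "server-C"),  # isolated failure, dropped
]
print(group_simultaneous(events))  # [['server-A', 'server-B']]
```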
In the first experiment, which compared the single-server and multi-server assumptions, the single-server model predicted no failure for three of the five servers even though failures actually occurred, whereas the multi-server model correctly predicted failures on all five servers. This result supports the hypothesis that servers affect one another, and confirms that prediction performance is superior under the multi-server assumption. In particular, applying the Hierarchical Attention Network, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that predicts failures of servers in data centers. The results are expected to help prevent failures in advance.
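The attention step, weighting each server's representation by its estimated impact before the final prediction, can be illustrated with a minimal pure-Python sketch. The vectors and scores below are invented for illustration; in the paper's model the per-server states would be LSTM outputs and the scores would be learned.

```python
import math

# Minimal sketch of server-level attention: each server's hidden state
# receives a weight via a relevance score, and the weighted sum becomes
# the aggregated representation used for failure prediction.
# All numbers here are invented, not the paper's parameters.

def softmax(scores):
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(server_states, scores):
    """server_states: list of equal-length vectors, one per server.
    scores: one relevance score per server (higher = more impact)."""
    weights = softmax(scores)
    dim = len(server_states[0])
    context = [sum(w * s[i] for w, s in zip(weights, server_states))
               for i in range(dim)]
    return weights, context

states = [[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]]  # 3 servers, 2-dim states
weights, context = attend(states, scores=[2.0, 0.5, 0.1])
# The first server, with the highest score, dominates the aggregate.
```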

The Effect Of Job Insecurity To The Union Commitment, Dual Commitment and The Union-Related Orientation (고용불안이 노조몰입, 이중몰입, 노사관계행동지향성에 미치는 영향에 관한 연구)

  • Son, Heon-I;Jung, Hyun-Woo
    • Management & Information Systems Review / v.34 no.2 / pp.131-149 / 2015
  • Recently, many organizations have engaged in widespread restructuring and more flexible use of labor in an attempt to cut costs and increase profits. As a result of layoffs caused by frequent restructuring, many people no longer consider their jobs permanent, and many employees feel increased job insecurity. Restructuring and the downsizing that follows it have created an uncertain environment with increased fear of further job losses, so the study of job insecurity is significant, especially for understanding the relationship between job insecurity and union-related behaviors in industrial relations. The purpose of this study is to examine how union-related behaviors are influenced by job insecurity and to suggest strategies to companies and unions. The study builds an exploratory model of the causal relationships between job insecurity and union commitment, dual commitment, and union-related behaviors. To verify the model, regression analysis was applied to surveys of 236 union members located in Busan, Gyeongnam, Ulsan, and Pohang. The results show that job insecurity is strongly related to union commitment and union-related behaviors, and that its effect on both is positive. Based on these outputs, we discuss the academic and practical implications. We proposed a comprehensive model of how job insecurity affects union-related behaviors and analyzed it objectively. The result runs counter to what existing theories have said, namely that high job insecurity drives high union-related behaviors. This result is meaningful because it bears on current social issues in Korean companies: low employment, unstable employment, and so on.
Moreover, this research may contribute to expanding academic research on job insecurity, as few such studies have been conducted in Korea. It also suggests realistic alternatives regarding union-related behaviors, since it shows that job security can contribute to innovation activities, and it implies that job security is a basic need of individuals in organizations, presenting it not merely as a notion but as an alternative grounded in positional stability and situational control. The limitation of this research is that it uses only a cross-sectional design; longitudinal and serial research methods are needed to remedy this. In addition, the sample is not large enough to secure fully comprehensive data, as the research targets were limited to the Busan and Gyeongnam regions. Finally, the measurement tool for job insecurity needs to be suitably adapted to South Korea's economic, linguistic, and cultural situation.


The knowledge and human resources distribution system for university-industry cooperation (대학에서 창출하는 지적/인적자원에 대한 기업연계 플랫폼: 인문사회계열을 중심으로)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.133-149 / 2014
  • One of the main purposes of universities is to create new intellectual resources that increase social value. These intellectual resources include academic research papers, lecture notes, patents, and creative ideas produced by both professors and students. However, intellectual resources in universities are often not distributed to actual users or companies; moreover, they are not even systematically managed inside the universities. It is therefore almost impossible for companies to access, let alone utilize, the knowledge created by university students and professors, and the current level of knowledge sharing between universities and industry is very low. This wastes high-quality intellectual and human resources and leads to considerable social loss in modern society. In the 21st century, creative ideas are a key growth engine for many industries. Globally leading companies such as FedEx, Dell, and Facebook established their business models on innovative ideas created by university students in undergraduate courses, which indicates that unconventional ideas from young generations can create new growth engines for companies and greatly increase social value. This paper therefore proposes a new platform for distributing the intellectual resources of universities to industry through university-industry cooperation. The suggested platform has the following characteristics. First, it distributes not only intellectual resources but also the human resources associated with the knowledge. Second, it diversifies the types of compensation for utilizing intellectual property in ways that benefit both university students and companies; for example, it extends conventional monetary rewards to non-monetary rewards such as preferential participation in internship programs or job interviews.
Third, it proposes a new knowledge map based on the relationships between keywords, so that the various types of intellectual property can be searched efficiently. To design the system platform, we surveyed 120 potential users to obtain the system requirements. First, 50 university students and 30 professors in humanities and social science departments were asked what types of intellectual resources they produce each year, how many they produce, whether they are willing to distribute their intellectual property to industry, and what types of compensation they expect in return. Second, 40 entrepreneurs, the potential consumers of university intellectual property, were asked what types of intellectual resources they want, what types of compensation they are willing to provide in return, and what factors they consider important when searching for intellectual property. The implications of the survey are as follows. First, entrepreneurs are willing to utilize intellectual property created by both professors and students, and they are more interested in creative ideas than in academic papers or class materials. Second, non-monetary rewards, such as participation in internship programs or job interviews, are appropriate substitutes for monetary rewards: the majority of surveyed students were willing to provide their intellectual property without monetary reward in order to build industrial networks with companies, and the entrepreneurs were willing to provide non-monetary compensation and hoped to build networks with university students for recruiting. The non-monetary rewards are thus mutually beneficial for both sides.
Third, classifying the intellectual resources of universities by academic area is inappropriate for efficient searching, and the various types of intellectual resources cannot be categorized under a single standard. Based on these survey results, this paper proposes a new platform for the distribution of intellectual materials and human resources through university-industry cooperation. The suggested platform contains four major components, namely the knowledge schema, the knowledge map, the system interface, and the GUI (Graphic User Interface), and the paper presents the overall system architecture.

Comparative Study of Security Services Industry Act and Police Assigned to Special Guard Act - Focused on special guards and police assigned to special guard duty - (경비업법과 청원경찰법의 비교 연구 특수경비원과 청원경찰을 중심으로)

  • Noh, Jin-keo;Lee, Young-ho;Choi, Kyung-cheol
    • Korean Security Journal / no.57 / pp.177-203 / 2018
  • The Police Assigned to Special Guard Act was legislated in 1962 to address the protection of various staple industrial installations, and in 2001 the Security Services Industry Act was revised to establish an effective security system for important national facilities, thereby instituting the Special Guards System. Current law thus comprises two parallel systems, the Police Assigned to Special Guard System and the Special Guards System, and many scholars have actively discussed whether the two should be integrated to solve the problems caused by this bimodal structure. In spite of these academic discussions, however, the idea of unification lost momentum when status-guarantee regulations were established for police assigned to special guard duty. Strictly speaking, police assigned to special guard duty are a form of self-guarding, while special guards are contractual guards, so each system has its pros and cons. Rather than trying to unify the two, it would therefore be desirable to give both systems a sound legal and constitutional footing by strengthening each and compensating for its weaknesses. To begin this process, the unreasonable legal provisions of the Security Services Industry Act and the Police Assigned to Special Guard Act should be revised as follows. First, since the actual responsibilities of special guards and police assigned to special guard duty are the same, the facilities they use should be made equal. Second, legal provisions should be revised so that a special guard may perform the duties of a police officer, under the Act on the Performance of Duties by Police Officers, within the facility to be secured, in order to prevent any vacancy in the guarding of an important national facility. Third, the disqualification criteria for special guards should be revised to match those for police assigned to special guard duty.
Fourth, it is reasonable to unify the training institutions for special guards and for police assigned to special guard duty under the police training institution, and on-the-job education for security guards should be extended to more than 4 hours every month, as it is for police assigned to special guard duty. Fifth, it is not right to limit the conditions under which a special guard may use a weapon to cases involving 'use of weapons or explosives' only; if a person resists while possessing 'dangerous objects such as weapons or deadly weapons', a special guard should be able to use a weapon against that person, and this provision should be revised accordingly. Sixth, the penalties, ranges of fines, and related provisions for police assigned to special guard duty should be revised to match those for special guards. Revising these provisions would correct the unreasonable parts of the Security Services Industry Act and the Police Assigned to Special Guard Act without unifying them. Through these revisions, special guards and police assigned to special guard duty can develop the civilian guard industry soundly under the law, and civilians will have a wider range of options for receiving high-quality security services.

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have a ripple effect not only on stakeholders, including the managers, employees, creditors, and investors of the bankrupt companies, but also on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large 'chaebol' corporations went bankrupt. Even afterwards, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid sudden total collapses such as the 'Lehman Brothers case' of the global financial crisis. The key variables driving corporate defaults vary over time: Deakin's (1972) comparison of the analyses of Beaver (1967, 1968) and Altman (1968) shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables in the models of Zmijewski (1984) and Ohlson (1980). However, past studies use static models and mostly do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias with a time series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years respectively.
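The 7/2/1-year chronological split can be sketched as below; the record layout is an assumption for illustration, not the study's actual data format.

```python
# Chronological split of 2000-2009 annual firm-year records into the
# 7/2/1-year training / validation / test sets described above.
# The record layout ({"year": ..., ...}) is assumed for illustration.

def split_by_year(records):
    train = [r for r in records if 2000 <= r["year"] <= 2006]  # pre-crisis
    valid = [r for r in records if 2007 <= r["year"] <= 2008]  # crisis period
    test  = [r for r in records if r["year"] == 2009]          # evaluation
    return train, valid, test

records = [{"year": y, "firm": f"firm-{y}"} for y in range(2000, 2010)]
train, valid, test = split_by_year(records)
print(len(train), len(valid), len(test))  # 7 2 1
```

Splitting strictly by calendar year, rather than randomly, is what lets the study check whether a model trained before the crisis still predicts well through and after it.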
To construct a bankruptcy model that is consistent through time, we first train the deep learning time series models on data from before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithms is then conducted on validation data that includes the financial crisis period (2007-2008). As a result, we obtain models that show patterns similar to the training results and excellent prediction power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009), based on the models trained over the preceding nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series models based on the three bundles of variables are useful for robust corporate default prediction. The definition of bankruptcy is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups, and the multivariate discriminant analysis model of Altman (1968), the logit model of Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data also poses the problems of nonlinear variables, multicollinearity among variables, and lack of data.
The logit model handles nonlinearity, the Lasso regression model mitigates the multicollinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis toward automated AI analysis and, eventually, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and more effective in prediction power. Under the banner of the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. As an initial study of deep learning time series analysis of corporate defaults, it is hoped that this work will serve as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
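Lasso's role as a variable selector comes from its soft-thresholding update, which drives weakly related coefficients exactly to zero. A minimal sketch of that operator, with invented correlation values standing in for the study's financial ratios:

```python
# Soft-thresholding operator at the heart of Lasso coordinate descent:
# coefficients whose (partial) correlation with the residual is smaller
# in magnitude than the penalty are set exactly to zero, which is how
# Lasso selects variables. The numbers below are illustrative only.

def soft_threshold(rho, lam):
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

# Hypothetical partial correlations of candidate ratios with default risk
rhos = {"debt_ratio": 0.9, "quick_ratio": -0.05, "roa": -0.6, "turnover": 0.1}
lam = 0.2  # penalty strength; larger lam -> fewer surviving variables
selected = {k: soft_threshold(r, lam) for k, r in rhos.items()}
print(selected)
# debt_ratio and roa survive; quick_ratio and turnover are zeroed out
```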

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.93-111 / 2013
  • As Internet and information technology (IT) continue to develop and evolve, the issue of big data has moved to the foreground of scholarly and industrial attention. Big data is generally defined as data that exceeds the range that can be collected, stored, managed, and analyzed by conventional information systems; the term also refers to the new technologies designed to effectively extract value from such data. With the widespread dissemination of IT systems, continual efforts have been made in fields of industry such as R&D, manufacturing, and finance to collect and analyze immense quantities of data, extract meaningful information, and use it to solve various problems. As IT has converged with many industries, digital data are now generated at a remarkably accelerating rate, while developments in state-of-the-art technology continually enhance system performance. The types of big data currently receiving the most attention include information available within companies, such as consumer characteristics, purchase records, logistics information, and logs of product and service usage, as well as information accumulated outside companies, such as the web search traffic of online users, social network information, and patent information. Among these, the web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes, because consumers search the internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends.
Research that uses this web search traffic information to analyze the information-search behavior of online users is now receiving much attention in academia and industry. Studies using web search traffic information fall broadly into two fields. The first consists of empirical demonstrations that web search information can forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, and so on. The second uses web search traffic information to observe consumer behavior, for example by identifying the product attributes consumers regard as important or by tracking changes in consumers' expectations; relatively little research has been completed in this field, and to the best of our knowledge hardly any brand-related studies have attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic can be used to derive the relations among brands and between an individual brand and product attributes. When consumers search the web, they may use a single keyword, but they also often input multiple keywords to seek related information (referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand. Web search traffic shows that the volume of simultaneous searches on a pair of keywords increases as the relation between them is closer in the consumer's mind, so the relations between keywords can be derived by collecting this relational data and subjecting it to network analysis.
Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information, and presents case studies demonstrating the actual application of this method, with a focus on tablet PCs, an innovative product group.
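The core idea above, that co-search volume indicates closeness in the consumer's mind, can be sketched as a small network analysis. The brand names and counts below are invented for illustration; in the paper such counts come from web search traffic services like Google Trends.

```python
from collections import defaultdict

# Build a keyword-relation network from simultaneous-search volumes and
# read off each keyword's closest association. Counts are invented.
co_searches = {                      # (keyword A, keyword B): co-search volume
    ("iPad", "Galaxy Tab"): 90,
    ("iPad", "retina display"): 70,
    ("Galaxy Tab", "Android"): 60,
    ("iPad", "Android"): 10,
}

graph = defaultdict(dict)
for (a, b), w in co_searches.items():
    graph[a][b] = w                   # undirected: closeness is symmetric
    graph[b][a] = w

def closest(node):
    """Neighbor with the largest co-search volume = closest association."""
    return max(graph[node].items(), key=lambda kv: kv[1])[0]

print(closest("iPad"))        # Galaxy Tab: the strongest brand relation
print(closest("Android"))     # Galaxy Tab
```

In the paper's terms, brand-to-brand edges reveal positioning (which brands consumers compare), while brand-to-attribute edges reveal which attributes are associated with a given brand.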

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • As information continues to be generated at an overwhelming rate, selecting high-quality information that meets the interests and needs of users from the overflowing content is becoming more important. Amid this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful and promising, because new information is constantly generated and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as finance where the information flow is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the results. Unlike previous references, the purpose of this study is to extract knowledge entities related to individual stock items.
The presented model applies various but relatively simple data processing methods to solve the problems of previous research and enhance its effectiveness. This gives the study three significances. First, it presents a practical and simple automatic knowledge extraction method that can actually be applied. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on the 30 individual stocks with the highest publication frequency from May 30, 2017 to May 21, 2018 are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. When a new entity from the testing set appears, its score is computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we check its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
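The prediction rule described above, scoring a new entity with every stock's score function and taking the argmax, together with the hit-ratio evaluation, can be sketched as follows. The score functions here are simple stand-ins for the trained neural tensor networks, and all entities and values are invented.

```python
# Sketch of argmax-over-score-functions prediction and hit-ratio
# evaluation. Each per-stock score function stands in for a trained
# neural tensor network; entities and scores are invented.

score_fns = {
    "StockA": lambda e: {"oled": 0.9, "battery": 0.4}.get(e, 0.0),
    "StockB": lambda e: {"battery": 0.8, "chassis": 0.7}.get(e, 0.0),
}

def predict_stock(entity):
    """Score the entity with every stock's function; argmax wins."""
    return max(score_fns, key=lambda s: score_fns[s](entity))

def hit_ratio(test_pairs):
    """test_pairs: (entity, true_stock) pairs from the testing reports."""
    hits = sum(predict_stock(e) == stock for e, stock in test_pairs)
    return hits / len(test_pairs)

pairs = [("oled", "StockA"), ("battery", "StockB"), ("chassis", "StockA")]
print(hit_ratio(pairs))  # 2 of 3 entities matched to the right stock
```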
In the empirical study, the presented model achieves 69.3% hit accuracy on the testing set of 2,526 reports, a meaningfully high ratio despite the constraints of the research. Looking at prediction performance for individual stocks, only three, LG Electronics, KiaMtr, and Mando, perform far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to search for information matching the user's investment intention. Graph data is generated using only the named entity recognition tool and fed to the neural tensor network, without learning a corpus or word vectors for the field, and the empirical test confirms the effectiveness of the presented model as described above. There remain limits to complement, however: most notably, the model's especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to match new text information semantically with the related stocks.

Access Restriction by Packet Capturing during the Internet based Class (인터넷을 이용한 수업에서 패킷캡쳐를 통한 사이트 접속 제한)

  • Yi, Jungcheol;Lee, Yong-Jin
    • 대한공업교육학회지
    • /
    • v.32 no.1
    • /
    • pp.134-152
    • /
    • 2007
This study deals with the development of a computer program that can restrict students from accessing disallowed web sites during an Internet-based class. Our program can find each student's access list to disallowed sites and display it on the teacher's computer screen. By limiting student access, the teacher can enhance the efficiency of the class and fulfill its educational purpose. Our results lead to effective and safe utilization of the Internet as a teaching tool in the classroom. Previously, the typical method was to turn off power to the LAN (Local Area Network) in order to block student access to disallowed web sites. Our program has been developed on the Linux operating system for a small network environment. It includes five functions: a translation function that changes a domain name into its IP (Internet Protocol) address; a search function that finds active student computers; a packet snoop that captures ongoing packets and inspects their contents; a comparison function that compares captured packet contents with a predefined access-restriction IP address list; and a restriction function that limits network access when the destination IP address matches an address in the restriction list. Our program captures all packets passing through the computer laboratory accurately and in real time. In addition, it displays on the teacher's screen all relevant information about students' access to disallowed sites, so the teacher can block such access immediately. The proposed program can be applied to the small networks of elementary, junior high, and senior high schools. Our results contribute to effective class management and efficient computer-laboratory management. Related research provides packet observation and access limitation for only one host, whereas our program provides them for all active hosts.
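The translation and comparison functions described above can be sketched in a few lines. This is a hedged illustration, not the paper's Linux program: the restriction list, the `dest_ip`/`should_restrict` helpers, and the hand-packed IPv4 header are all hypothetical, and a real deployment would read headers from a raw socket or libpcap rather than from a simulated byte string.

```python
import socket
import struct

# Hypothetical access-restriction list: IP addresses of disallowed sites.
RESTRICTED = {"93.184.216.34", "203.0.113.7"}

def resolve(domain):
    """Translation function: domain name -> IP address."""
    return socket.gethostbyname(domain)

def dest_ip(ip_header):
    """Extract the destination IP from a raw IPv4 header (bytes).
    The destination address occupies bytes 16-19 of the header."""
    return socket.inet_ntoa(ip_header[16:20])

def should_restrict(ip_header):
    """Comparison function: restrict when the captured packet's
    destination IP matches an entry in the restriction list."""
    return dest_ip(ip_header) in RESTRICTED

# Simulated 20-byte IPv4 header: 10.0.0.5 -> 93.184.216.34.
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.5"),
                  socket.inet_aton("93.184.216.34"))
```

On Linux, the capture itself would typically use a raw `AF_PACKET` socket with elevated privileges; the restriction step would then drop or reset matching flows.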

A Study on the Competition Strategy for Private Super Market against Super Super Market (슈퍼슈퍼마켓(SSM)에 대한 개인 슈퍼마켓의 경쟁전략에 관한 연구)

  • Yoo, Seung-Woo;Lee, Sang-Youn
    • The Journal of Industrial Distribution & Business
    • /
    • v.2 no.2
    • /
    • pp.39-45
    • /
    • 2011
The Korean distribution industry is gearing up for endless competition. Entering a low-growth era, less competitive parties will face serious challenges to their survival. Large discount stores have shown steady annual growth for years; however, because of saturation in the number of stores, the difficulty of acquiring new sites, and changes in consumer behavior caused by the recession, they are now seeking new customer-based business formats. Accordingly, large corporations have established supermarkets belonging to their affiliated companies. Based on strong buying power, these SSMs (Super Super Markets) have been aggressively developed into nationwide multi-store chains. The point is that these stores threaten small and medium-sized, community-based private supermarkets. Private supermarkets and retailers, who use outdated operating systems and dilapidated facilities, are losing their competitive edge. Recently, the social effects of large corporate supermarket chains on traditional markets have become very controversial, and the media, academia, and the associated industry have held many seminars and public hearings. Regulation may slow the pace of expansion but will not be a decisive alternative. One reason for the recent surge of enterprise-class SSMs is stagnation in offline discount marts, which drives the search for new growth areas. Large supermarkets have already occupied most of the geographically central locations nationwide and reached saturation, so expanding in the SSM format, targeting the small trade areas that discount marts do not cover, is urgent for them. By contrast, private supermarkets are losing their competitiveness. 
One vulnerability of individual supermarkets is price: economies of scale cannot be realized because they purchase products in small quantities and find it difficult to obtain quantity discounts. The lack of organization and collaboration, and impractical education, have caused an absence of service orientation. As a first solution, establishing specialty shops handling agricultural products, fruits and vegetables, and manufactured goods is recommended. Second, private supermarket franchisees should join an organization for cooperation and collaboration; this can achieve economies of scale and form an alternative business format recognized by the government. Third, education is needed, since good service raises consumer awareness. In addition, psychological store operation, which SSMs cannot imitate, is another way to stimulate consumer sentiment. Japan has already achieved better conditions for small retailers through small-scale chain development. This study examines the vulnerabilities of private supermarkets and suggests competitiveness-reinforcement strategies.

  • PDF

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
Fuzzy-logic-based control theory has gained much interest in the industrial world thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify analytically. This paper presents a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shape, or to predefined shapes. These kinds of functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: for each fuzzy set, many elements of the universe of discourse may have a membership value equal to zero. It has also been observed that in almost all cases the points shared among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on this hypothesis. Moreover, we use a technique that, without restricting the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (to give good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. The numbers of bits necessary for these specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values at any element of the universe of discourse, dm(m) is the dimension of a membership value, and dm(fm) is the dimension of the word representing the index of a membership function. In our case, Length = 3 × (5 + 3) = 24. The memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the fuzzy-set word dimension would be 8 × 5 bits, so the memory dimension would have been 128 × 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets.
Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (Fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the vectorial memorization method [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values at any element of the universe of discourse is limited. This constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
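The memory-sizing arithmetic above can be checked in a few lines. The parameter values are those of the example term set (128 elements, 8 fuzzy sets, 32 levels, at most 3 non-null memberships per element); the variable names are ours.

```python
import math

# Parameters of the example term set from the text.
universe = 128   # elements in the universe of discourse
n_sets = 8       # fuzzy sets in the term set
levels = 32      # discretization levels for membership values
nfm = 3          # max non-null memberships per element

dm_m = math.ceil(math.log2(levels))   # bits per membership value: 5
dm_fm = math.ceil(math.log2(n_sets))  # bits per fuzzy-set index: 3

# Word length of the proposed scheme: Length = nfm * (dm(m) + dm(fm)).
length = nfm * (dm_m + dm_fm)         # 3 * (5 + 3) = 24 bits

proposed = universe * length          # 128 * 24 = 3072 bits
vectorial = universe * n_sets * dm_m  # 128 * 40 = 5120 bits
```

The comparison reproduces the 128×24-bit versus 128×40-bit figures quoted in the abstract, a saving of 40% for this term set.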

  • PDF