• Title/Summary/Keyword: 정보효과 (Information Effect)


Product Community Analysis Using Opinion Mining and Network Analysis: Movie Performance Prediction Case (오피니언 마이닝과 네트워크 분석을 활용한 상품 커뮤니티 분석: 영화 흥행성과 예측 사례)

  • Jin, Yu;Kim, Jungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.49-65
    • /
    • 2014
  • Word of Mouth (WOM) is a behavior used by consumers to transfer or communicate their product or service experience to other consumers. Due to the popularity of social media such as Facebook, Twitter, blogs, and online communities, electronic WOM (e-WOM) has become important to the success of products or services. As a result, most enterprises pay close attention to e-WOM for their products or services. This is especially important for movies, as these are experiential products. This paper aims to identify the network factors of an online movie community that impact box office revenue using social network analysis. In addition to traditional WOM factors (volume and valence of WOM), network centrality measures of the online community are included as influential factors in box office revenue. Based on previous research results, we develop five hypotheses on the relationships between potential influential factors (WOM volume, WOM valence, degree centrality, betweenness centrality, closeness centrality) and box office revenue. The first hypothesis is that the accumulated volume of WOM in online product communities is positively related to the total revenue of movies. The second hypothesis is that the accumulated valence of WOM in online product communities is positively related to the total revenue of movies. The third hypothesis is that the average of degree centralities of reviewers in online product communities is positively related to the total revenue of movies. The fourth hypothesis is that the average of betweenness centralities of reviewers in online product communities is positively related to the total revenue of movies. The fifth hypothesis is that the average of closeness centralities of reviewers in online product communities is positively related to the total revenue of movies. 
To verify our research model, we collect movie review data from the Internet Movie Database (IMDb), a representative online movie community, and movie revenue data from the Box-Office-Mojo website. The movies in this analysis are the weekly top-10 movies from September 1, 2012, to September 1, 2013. We collect movie metadata such as screening periods and user ratings, as well as community data in IMDb including reviewer identification, review content, review times, responder identification, reply content, reply times, and reply relationships. For the same period, the revenue data from Box-Office-Mojo is collected on a weekly basis. Movie community networks are constructed based on reply relationships between reviewers. Using a social network analysis tool, NodeXL, we calculate the averages of three centralities, namely degree, betweenness, and closeness centrality, for each movie. Correlation analysis of the focal variables and the dependent variable (final revenue) shows that the three centrality measures are highly correlated, prompting us to perform multiple regressions separately with each centrality measure. Consistent with previous research results, our regression analysis shows that the volume and valence of WOM are positively related to the final box office revenue of movies. Moreover, the averages of betweenness centralities from initial community networks impact the final movie revenues. However, neither the averages of degree centralities nor those of closeness centralities influence final movie performance. Based on the regression results, hypotheses 1, 2, and 4 are accepted, and hypotheses 3 and 5 are rejected. This study links the network structure of e-WOM in online product communities with product performance. Based on the analysis of a real online movie community, the results show that online community network structures can work as a predictor of movie performance. 
The results show that the betweenness centralities of the reviewer community are critical for the prediction of movie performance. However, degree centralities and closeness centralities do not influence movie performance. As future research topics, similar analyses are required for other product categories such as electronic goods and online content to generalize the study results.
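The centrality computation described above can be sketched in a few lines. The study used NodeXL, so the following Python sketch is only illustrative: the reply network is hypothetical, and for brevity only the normalized degree centrality is computed (betweenness and closeness follow the same per-movie averaging).

```python
from collections import defaultdict

# Hypothetical reply relationships (reviewer, responder) for one movie.
replies = [("alice", "bob"), ("carol", "alice"), ("dave", "alice"), ("bob", "carol")]

# Build an undirected reviewer network from the reply relationships.
neighbors = defaultdict(set)
for a, b in replies:
    neighbors[a].add(b)
    neighbors[b].add(a)

n = len(neighbors)
# Normalized degree centrality: degree / (n - 1), as in standard SNA.
degree_centrality = {v: len(adj) / (n - 1) for v, adj in neighbors.items()}
# The per-movie predictor used in the regression is the average centrality.
avg_degree_centrality = sum(degree_centrality.values()) / n
print(round(avg_degree_centrality, 3))  # 0.667 for this toy network
```

In the study, one such average is computed per movie from the early community network and then entered into the revenue regression.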

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell in order to earn excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. Some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data must be discretized for rough set analysis because rough sets accept only categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, obtained through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first finds candidate categorical values by naïve scaling of the data, then finds the optimized discretization thresholds through Boolean reasoning. 
Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance under rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable for the validation sample; the latter also produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, experimenting with C4.5 for comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
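Of the four discretization methods, equal frequency scaling is the easiest to sketch. The following Python toy (the indicator values and interval count are hypothetical, not the study's data) picks cuts so that roughly equal numbers of samples fall into each interval:

```python
# Equal frequency scaling: choose cuts so roughly equal counts fall in each interval.
def equal_frequency_cuts(values, n_intervals):
    s = sorted(values)
    size = len(s) / n_intervals
    # Cut points are taken at the boundaries between equal-sized groups.
    return [s[int(round(size * i)) - 1] for i in range(1, n_intervals)]

def discretize(value, cuts):
    # Map a numeric value to its interval index (a categorical label).
    for i, c in enumerate(cuts):
        if value <= c:
            return i
    return len(cuts)

values = [10, 12, 15, 20, 22, 30, 31, 40, 55, 70]  # hypothetical indicator values
cuts = equal_frequency_cuts(values, 4)
labels = [discretize(v, cuts) for v in values]
print(cuts, labels)  # cuts [12, 22, 40]; interval counts 2, 3, 3, 2
```

The resulting interval labels are the categorical inputs the rough set engine then reasons over; the other three methods differ only in how the cuts are chosen.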

Analysis of shopping website visit types and shopping pattern (쇼핑 웹사이트 탐색 유형과 방문 패턴 분석)

  • Choi, Kyungbin;Nam, Kihwan
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.85-107
    • /
    • 2019
  • Online consumers browse products belonging to a particular product line or brand before purchase, or simply navigate widely without making a purchase. Research on the behavior and purchases of online consumers has progressed steadily, and services and applications based on consumer behavior data have been developed in practice. In recent years, personalization strategies and recommendation systems have been deployed thanks to the development of big data technology, and attempts are being made to optimize users' shopping experience. Even so, only a small fraction of website visits actually convert to the purchase stage. This is because online consumers do not visit a website only to purchase products; they use and browse websites differently according to their shopping motives and purposes. Therefore, analyzing the various types of visits, not only purchase visits, is important for understanding the behavior of online consumers. In this study, we performed session-level clustering analysis on the clickstream data of an e-commerce company in order to explain the diversity and complexity of online consumers' search behavior and to typify it. For the analysis, we converted more than 8 million page-level data points into visit-level sessions, resulting in a total of more than 500,000 website visit sessions. For each visit session, 12 characteristics such as page views, duration, search diversity, and page-type concentration were extracted for clustering analysis. Considering the size of the data set, we performed the analysis using the Mini-Batch K-means algorithm, which has advantages in learning speed and efficiency while maintaining clustering performance similar to that of the standard K-means algorithm. 
The optimal number of clusters was found to be four, and differences in session-level characteristics and purchase rates were identified for each cluster. An online consumer typically visits a website several times, learns about the product, and then decides to purchase. To analyze this purchase process across multiple visits, we constructed consumer visit-sequence data based on the navigation patterns derived from the clustering analysis. The visit-sequence data comprise series of visits leading up to a purchase, where the items constituting a sequence are the cluster labels derived above. We separately built sequence data for consumers who made purchases and for consumers who only explored products without purchasing during the same period. Sequential pattern mining was then applied to extract frequent patterns from each sequence data set. The minimum support was set to 10%, and frequent patterns consist of sequences of cluster labels. While some patterns were derived from both sequence data sets, other frequent patterns were derived from only one. Through comparative analysis of the extracted frequent patterns, we found that consumers who made purchases repeatedly searched for a specific product before deciding to buy it. The implication of this study is that we analyze the search types of online consumers using large-scale clickstream data and analyze their patterns to explain purchase-process behavior from a data-driven perspective. Most studies that typify online consumers have focused on the characteristics of the types and on which factors are key in distinguishing them. 
In this study, we carried out an analysis to typify the behavior of online consumers and further analyzed in what order the types are organized into series of search patterns. In addition, online retailers will be able to improve their purchase conversion through marketing strategies and recommendations tailored to the various visit types, and to evaluate the effects of such strategies through changes in consumers' visit patterns.
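The sequential-pattern step can be illustrated with a toy support computation. The cluster labels and sequences below are hypothetical stand-ins for the study's four derived clusters, and only a single candidate pattern is checked (the study mined all frequent patterns):

```python
# Hypothetical visit-sequence data: each sequence is a list of cluster labels
# (one label per visit), here for consumers who ended with a purchase.
sequences = [
    ["browse", "compare", "compare", "buy"],
    ["browse", "compare", "buy"],
    ["search", "compare", "buy"],
    ["browse", "search", "compare", "buy"],
    ["search", "browse", "buy"],
]

def support(pattern, seqs):
    """Fraction of sequences containing `pattern` as an ordered subsequence."""
    def contains(seq, pat):
        it = iter(seq)
        return all(label in it for label in pat)  # consumes `it`, preserving order
    return sum(contains(s, pattern) for s in seqs) / len(seqs)

# A pattern is frequent if its support meets the 10% threshold used in the study.
min_support = 0.10
pattern = ["compare", "buy"]
print(support(pattern, sequences))  # 0.8 -> frequent
```

Comparing the frequent patterns of the purchasing and non-purchasing sequence sets, as the study does, then reduces to set operations over the patterns that clear the threshold in each.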

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin;Hwang, Ji Won;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.1-22
    • /
    • 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment based on the aspects that appear in a text, is drawing attention because it can be used in various business industries. ABSA analyzes sentiment by aspect for the multiple aspects a text contains, and is studied in various forms depending on the purpose, such as analyzing all targets or only aspects and sentiments. Here, an aspect refers to a property of a target, and a target refers to the text that causes the sentiment. For example, for restaurant reviews, the aspects could be set as food taste, food price, quality of service, mood of the restaurant, and so on. Also, if there is a review that says, "The pasta was delicious, but the salad was not," the words "pasta" and "salad," which are directly mentioned in the sentence, become the targets. So far, most ABSA studies have analyzed sentiment based only on aspects or targets. However, even with the same aspects or targets, sentiment analysis may be inaccurate, for instance when aspects or sentiments are divided or when sentiment exists without a target. Consider a sentence like "Pizza and the salad were good, but the steak was disappointing": although the aspect is limited to "food," conflicting sentiments coexist. Likewise, in a sentence such as "Shrimp was delicious, but the price was extravagant," although the target is "shrimp," opposite sentiments coexist depending on the aspect. Finally, in a sentence like "The food arrived too late and is cold now," there is no target (NULL), but it conveys a negative sentiment toward the aspect "service." Failure to consider both aspects and targets in such cases creates a dual dependency problem. 
To address this problem, this research analyzes sentiment by considering both aspects and targets (Target-Aspect-Sentiment Detection, hereafter TASD). This study identified limitations of existing TASD research: local contexts are not fully captured, and the number of epochs and the batch size dramatically affect the F1-score. The existing model excels at spotting the overall context and the relations between words, but struggles with phrases in the local context and is relatively slow to train. Therefore, this study tries to improve the model's performance. To achieve this, we added an auxiliary loss for aspect-sentiment classification by constructing CNN (Convolutional Neural Network) layers parallel to the existing model. Whereas existing models analyze aspect-sentiment through BERT encoding, pooler, and linear layers, this research added a CNN layer with adaptive average pooling to the existing model, and training proceeded by adding an additional aspect-sentiment loss to the existing loss. In other words, during training, the auxiliary loss computed through the CNN layers allows the local context to be captured more precisely; after training, the model performs aspect-sentiment analysis through the existing method. To evaluate the model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared to the existing models. When the batch size was 8 and the number of epochs was 5, the difference was largest, with F1-scores of 29 for the existing models and 45 for this study. Even when batch size and epochs were adjusted, the F1-scores remained higher than those of the existing models, indicating that the model learns effectively even with small batch and epoch numbers and can therefore be useful where resources are limited. Through this study, aspect-based sentiments can be analyzed more accurately. 
Through various uses in business, such as product development or establishing marketing strategies, both consumers and sellers will be able to make efficient decisions. In addition, the model is believed to be trainable and usable by small businesses that do not have much data, given that it uses a pre-trained model and recorded a relatively high F1-score even with limited resources.
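The auxiliary-loss idea can be sketched in PyTorch. This is an illustrative stand-in, not the paper's exact architecture: the tensor sizes, head names, and random inputs are all assumptions; only the structure (a Conv1d branch with adaptive average pooling, whose loss is added to the main loss) follows the description above.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not the paper's).
hidden, n_classes, seq_len, batch = 32, 3, 10, 4

token_states = torch.randn(batch, seq_len, hidden)   # stand-in for BERT token encodings
labels = torch.randint(0, n_classes, (batch,))

# Main head: pooled representation -> linear classifier (the existing path).
main_head = nn.Linear(hidden, n_classes)
main_logits = main_head(token_states.mean(dim=1))

# Auxiliary head: Conv1d over the token axis captures local context;
# adaptive average pooling collapses it to one vector per example.
aux_cnn = nn.Sequential(
    nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
)
aux_classifier = nn.Linear(hidden, n_classes)
aux_logits = aux_classifier(aux_cnn(token_states.transpose(1, 2)).squeeze(-1))

criterion = nn.CrossEntropyLoss()
# Joint objective: auxiliary loss is simply added to the existing loss.
loss = criterion(main_logits, labels) + criterion(aux_logits, labels)
loss.backward()  # gradients flow through both heads during training
```

At inference time only the main path is used, matching the description that the model "does aspect-sentiment analysis through the existing method" after learning.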

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. IT facility failures in particular occur irregularly because of interdependence, and their causes are difficult to determine. Previous studies predicting failures in data centers treated a single server as a single state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are being developed. On the other hand, the causes of failures occurring inside servers are difficult to determine, and adequate prevention has not yet been achieved, particularly because server failures do not occur singly: one failure may cause failures in other servers or be triggered by failures elsewhere. In other words, while existing studies analyzed failures assuming a single server that does not affect other servers, this study assumes that failures propagate between servers. 
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring for each device are sorted in chronological order, and when a failure occurs in a specific piece of equipment, any failure occurring in another piece of equipment within 5 minutes is defined as occurring simultaneously. After configuring sequences for the devices that failed at the same time, five devices that frequently failed simultaneously within the configured sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, considering that the level of multiple failures differs for each server. This algorithm increases prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated both as a single-server state and as a multiple-server state, and the two were compared. The second experiment improved the prediction accuracy for complex failures by optimizing each server's threshold. 
In the first experiment, which assumed a single server and multiple servers in turn, the single-server case predicted no failure for three of the five servers even though failures actually occurred, whereas the multiple-server case correctly predicted failures for all five servers. The experiment thus supports the hypothesis that servers affect one another, and confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using these results.
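The server-level attention weighting at the heart of the Hierarchical Attention Network idea can be sketched with plain softmax arithmetic. Everything below is illustrative: the server names, state vectors, and attention scores are hypothetical (in the real model the scores are learned and the states come from the LSTM encoders); the sketch shows only how higher-impact servers receive more weight in the combined representation.

```python
import math

# Hypothetical encoded states per server (in the real model: LSTM outputs).
server_states = {"db": [0.9, 0.1], "web": [0.2, 0.8], "cache": [0.5, 0.5]}
# Hypothetical attention scores (in the real model: learned).
attn_scores = {"db": 2.0, "web": 0.5, "cache": 1.0}

def softmax(scores):
    # Numerically stable softmax over a dict of scores.
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    z = sum(exps.values())
    return {k: e / z for k, e in exps.items()}

weights = softmax(attn_scores)
# System-level context vector: attention-weighted sum of server states,
# so servers with more impact on the failure count more.
context = [sum(weights[s] * server_states[s][i] for s in server_states)
           for i in range(2)]
print([round(c, 3) for c in context])  # [0.709, 0.291]
```

The "db" server, with the highest score, dominates the context vector, which is the mechanism by which the model weights servers by their impact on the failure.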

Thermal Properties of Granite from the Central Part of Korea (한국 중부 지역의 화강암 열물성)

  • Kim, Jongchan;Lee, Youngmin;Koo, Min-Ho
    • Economic and Environmental Geology
    • /
    • v.47 no.4
    • /
    • pp.441-453
    • /
    • 2014
  • Thermal and physical properties were measured on 206 Jurassic granite samples obtained from three boreholes in the central part of Korea. Thermal conductivity (λ), thermal diffusivity (α), and specific heat (Cp) were measured in a laboratory; the average values are λ = 2.813 W/mK, α = 1.296 mm²/sec, and Cp = 0.816 J/gK, respectively. In addition, porosity (φ) and dry and saturated density (ρ) were measured in the laboratory; the average values are φ = 0.01, ρ(dry) = 2.662 g/cm³, and ρ(saturated) = 2.67 g/cm³, respectively. Thermal diffusivity of 10 granite samples was measured with increasing temperature from 25°C to 200°C; we found that thermal diffusivity at 200°C is about 30% lower than at 25°C. In correlation analysis, thermal conductivity increases with increasing thermal diffusivity, but does not correlate well with porosity or density. Consequently, the thermal conductivity of granite appears to be influenced more by mineral composition than by porosity. We also derived ρ = −2.393φ + 2.705 from the density and porosity data. XRD and XRF analyses were performed to investigate the effects of mineral and chemical composition on thermal conductivity. We found that thermal conductivity increases with increasing quartz and SiO2 content and decreases with increasing albite and Al2O3 content. Regression analysis using these mineral and chemical compositions was carried out; we found K = 0.0294·V_Quartz + 1.93 for quartz, K = 0.237·W_SiO2 − 14.09 for SiO2, and K = 0.053·W_SiO2 − 0.476·W_Al2O3 + 6.52 for SiO2 and Al2O3. Specific gravities were measured on 10 granite samples in the laboratory; the measured specific gravity depends on the chemical composition of the granite. 
Therefore, specific gravity can be estimated from the felsic-mafic index (F) calculated from the chemical composition. The estimated specific gravity ranges from 2.643 to 2.658, and the average relative error between measured and estimated specific gravities is 0.677%.
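The regression relations reported above can be checked numerically. The coefficients below are taken from the abstract; the input values (φ = 0.01, quartz at 30 vol%) are illustrative, chosen to compare against the reported average measurements:

```python
def density_from_porosity(phi):
    # rho = -2.393 * phi + 2.705  (g/cm^3), as reported in the abstract.
    return -2.393 * phi + 2.705

def conductivity_from_quartz(v_quartz):
    # K = 0.0294 * V_Quartz + 1.93  (W/mK), V_Quartz in volume percent.
    return 0.0294 * v_quartz + 1.93

# At the reported mean porosity phi = 0.01, the regression predicts
# rho ~ 2.681 g/cm^3, near the measured average dry density of 2.662 g/cm^3.
rho = density_from_porosity(0.01)
print(round(rho, 4))  # 2.6811

# A quartz content of 30 vol% reproduces roughly the reported average
# conductivity of 2.813 W/mK.
k = conductivity_from_quartz(30)
print(round(k, 3))  # 2.812
```

Plugging representative values back into the fitted lines like this is a quick sanity check that the regression coefficients and the reported averages are mutually consistent.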

Control of Kimchi Fermentation by the Addition of Natural Antimicrobial Agents Originated from Plants (식물유래 천연항균물질 첨가에 의한 김치의 발효조절)

  • Seo, Hyun-Sun;Kim, Seonhwa;Kim, Jinsol;Han, Jaejoon;Ryu, Jee-Hoon
    • Korean Journal of Food Science and Technology
    • /
    • v.45 no.5
    • /
    • pp.583-589
    • /
    • 2013
  • We investigated delaying kimchi fermentation by adding plant extracts. Fifteen plant extracts were screened for inhibitory activity against Lactobacillus plantarum using an agar well diffusion assay, and the minimal inhibitory concentration (MIC) and minimal lethal concentration (MLC) were determined. Grapefruit seed extract (GFSE) showed the lowest MIC (0.0313 mg/mL), followed by Caesalpinia sappan L. extract (CSLE; 0.25 mg/mL) and oregano essential oil (OREO; 1.0 mg/mL). GFSE, CSLE, and OREO were individually added to kimchi, and the samples were incubated at 10°C for up to 20 days. The results showed that the addition of GFSE (0.3 and 0.5%), CSLE (0.1, 0.3, and 0.5%), or OREO (0.5 and 1.0%) led to a significantly higher pH of kimchi and a significant reduction in the numbers of lactic acid bacteria. Taken together, the addition of natural antimicrobial agents can delay kimchi fermentation.

Effect of Work Improvement for Promotion of Outpatient Satisfaction on CT scan (CT 외래환자의 만족도 향상을 위한 업무개선 연구)

  • Han, Man-Seok;Lee, Seung-Youl;Lee, Myeong-Goo;Jeon, Min-Cheol;Cho, Jae-Hwan;Kim, Tae-Hyung
    • Journal of radiological science and technology
    • /
    • v.35 no.1
    • /
    • pp.45-50
    • /
    • 2012
  • Nowadays, most hospitals offer "one-stop service" for CT scans: patients can be scanned on the same day they register. Despite this time convenience, patients are dissatisfied with long waiting times and unkindness of staff. The objective of this study is to improve patient satisfaction with CT scans by analyzing inconvenience factors and improving service quality. From April 1 to August 30, 2011, we investigated the satisfaction of patients who underwent contrast-enhanced abdominal CT scans, analyzing 89 questionnaires collected before and after the service improvements. Staff kindness, the environment of the CT room, and understanding of the CT scan were assessed by questionnaire, and the same-day CT waiting time was drawn from medical information statistics. The period before improvement was from April to June, and the period after improvement was from July to September. The questionnaires were analyzed using SPSS V. 15.0. Through our service enhancement programs, the kindness score improved by 32%, satisfaction with the CT room environment improved by 52.54%, understanding of the CT scan improved by 52.36%, and the same-day CT waiting time was shortened by 21%. Consequently, these efforts are considered to contribute to increasing hospital revenue.

Fun of Animation-on the Correlation among the Perceptive fun, the Cognitive fun and the Psychological fun (애니메이션의 재미 - 감각적 재미, 인지적 재미, 심리적 재미의 상관관계)

  • Sung, Re-A
    • Cartoon and Animation Studies
    • /
    • s.33
    • /
    • pp.99-126
    • /
    • 2013
  • This study examines how the fun of animation works by reviewing it theoretically, organizing the findings into an integrated structure of fun in animation, and validating the proposed fun model. After the theoretical review, the fun of animation could be organized into perceptive fun, cognitive fun, and psychological fun. Perceptive fun is induced by visual, auditory, and other sensory information and is directly affected by image, sound, and movement. Cognitive fun is obtained through reasoning and interpretation, mobilizing one's knowledge with sensuously perceived stimulation, and is directly affected by the story. Psychological fun occurs when the audience watches the animation: it is the emotional state induced by relieving psychological congestion, and it consists of the fun of unfamiliarity or of identification. By proposing a research model and validating how perceptive, cognitive, and psychological fun affect one another, we found that perceptive fun enhances both cognitive fun and psychological fun. Cognitive fun also enhances psychological fun, about twice as strongly as perceptive fun does. Moreover, when perceptive fun affects psychological fun, cognitive fun shows an indirect effect as a mediating variable. In conclusion, perceptive fun affects psychological fun directly and is also channeled through cognitive fun. The fun of animation is experienced when perceptive fun, caused by instantly accepting the sensory information of the animation; cognitive fun, caused by interpreting and understanding that information; and psychological fun, caused by relieving psychological identity through recognition, fuse and act as one. An animation that emphasizes only one element is unlikely to be loved by the audience. 
For this reason, a harmonious combination of story, image, sound, and movement is important for a successful animation that entertains audiences by arousing enjoyable emotions.

A Study on the Realities and the Subject of Environmental Management for Small and Medium-Sized Companies in Gangwon Area (강원지역 중소기업의 환경경영 실태와 과제)

  • Jeon, Yeong-Seung;Park, Eun-Jeong
    • Korean Business Review
    • /
    • v.17
    • /
    • pp.53-81
    • /
    • 2004
  • The purpose of this study is to understand the realities and tasks of environmental management for small and medium-sized companies in the Gangwon area by surveying the present status of ISO 14001 certification, and to seek a plan to facilitate environmental management. The key results are summarized as follows. First, while the number of companies in Korea that had acquired ISO 14001 certification amounted to 1,215 businesses as of April 2003, the number of certified small and medium-sized companies in the Gangwon area reached only 26, the lowest level among metropolitan municipalities. Second, as the reason they do not pursue certification, companies without it pointed out that the costs of acquiring and maintaining the certification exceed the practical benefit. Third, the biggest motivation for pursuing ISO 14001 certification, for both companies that had and had not acquired it, was 'enhancement of corporate image,' and the effect reported after introducing the environmental management system was likewise 'improvement of corporate image.' Fourth, many certified companies pointed out 'the burden of document creation and costs' and 'lack of manpower' as problems when introducing the environmental management system. On the basis of these results, the tasks and plans for activating environmental management among small and medium-sized companies in the Gangwon area are as follows. First, because most companies without ISO 14001 certification have low awareness of it, continuous and positive publicity, education, and training are needed. 
Second, it is necessary to carry out educational programs to nurture professional manpower, given the shortage of personnel for environmental management; to expand the payment of subsidies; to open dedicated departments and consulting contacts; to build a database of relevant information; and to develop supporting software. Third, in order to make certification obtainable at low cost and with simple procedures, it is necessary to positively consider public approval systems for small and medium-sized companies, group approval systems, industrial-complex approval systems, and the like.
