• Title/Summary/Keyword: in-the-making


Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.35-52 / 2019
  • As public services are provided in increasingly varied forms, including e-government, public expectations for service quality keep rising. Continuous measurement and improvement of public service quality are therefore needed, but traditional surveys are costly, time-consuming, and limited. An analytical technique is needed that can measure service quality quickly and accurately at any time, based on the data generated while public services are delivered. In this study we analyzed public service quality with process mining techniques, using the building licensing complaint service of N city. This service was chosen because it provides the data needed for the analysis and because the approach can be spread to other institutions for public service quality management. We applied process mining to a total of 3,678 building license complaint cases filed in N city over the two years from January 2014, and identified the process maps and the departments with high frequency and long processing times. The analysis showed that some departments were crowded at certain points in time while others handled relatively few cases, and there was reasonable suspicion that an increase in the number of complaints lengthened the time required to complete them. Completion times ranged from the same day up to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and the cumulative frequency of the top nine departments exceeded 70%; the heavily involved departments were few, and the workload was highly unbalanced across departments. Most complaint cases followed many different process patterns. The analysis also showed that the number of 'supplementation' decisions had the greatest impact on how long a complaint took: a 'supplementation' decision requires real time for the complainant to revise and resubmit documents, which stretches out the period until the whole complaint is completed. Overall processing time can therefore be reduced substantially if applicants prepare thoroughly before filing. Clarifying and disclosing the causes of, and remedies for, 'supplementation' decisions recorded in the system helps complainants prepare in advance and gives them confidence that documents prepared from the published information will pass, making the handling of complaints far more predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by removing the need for re-consultation or repeated work. The results of this study can be used to identify departments carrying a heavy complaint burden at particular times and to manage workforce allocation between departments flexibly. In addition, because the analysis reveals the pattern of departments participating in consultations for each type of complaint, it can support automation or recommendation when routing consultation requests to departments. Furthermore, by applying machine learning techniques to the various data generated during complaint handling, the patterns of the complaint process can be discovered, turned into algorithms, and applied to the system to automate and add intelligence to complaint processing. This study is expected to serve as a reference for improving public service quality through process mining analysis of civil services.
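As a rough illustration of the kind of event-log analysis described above (case throughput times, department frequencies, process variants), and not the authors' actual code, a minimal pandas sketch might look like the following; the column names and file path are assumptions.

```python
# Minimal sketch of the frequency / processing-time analysis described above,
# assuming a complaint event log with columns: case_id, department, timestamp.
# Column names and the CSV path are illustrative, not from the paper.
import pandas as pd

log = pd.read_csv("building_license_log.csv", parse_dates=["timestamp"])

# Throughput time per complaint case (filing to completion).
case_time = (log.groupby("case_id")["timestamp"]
                .agg(start="min", end="max")
                .assign(duration_days=lambda d: (d["end"] - d["start"]).dt.days))
print(case_time["duration_days"].describe())    # spread from same-day cases to very long ones

# Which departments appear most often in the consultation steps?
dept_freq = log["department"].value_counts()
cum_share = dept_freq.cumsum() / dept_freq.sum()
print(cum_share.head(10))                       # cumulative share of the busiest departments

# Most frequent process variants (sequences of departments per case).
variants = (log.sort_values("timestamp")
               .groupby("case_id")["department"]
               .apply(tuple)
               .value_counts())
print(variants.head())
```

A full process-mining toolkit would add discovery and conformance checking on top of such summaries; the sketch only shows where the frequency and duration figures quoted in the abstract come from.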

Self-Regulatory Mode Effects on Emotion and Customer's Response in Failed Services - Focusing on the moderate effect of attribution processing - (고객의 자기조절성향이 서비스 실패에 따른 부정적 감정과 고객반응에 미치는 영향 - 귀인과정에 따른 조정적 역할을 중심으로 -)

  • Sung, Hyung-Suk;Han, Sang-Lin
    • Asia Marketing Journal / v.12 no.2 / pp.83-110 / 2010
  • Dissatisfied customers may express their dissatisfaction behaviorally, and these behavioral responses can affect a firm's profitability. How, then, should we model the impact of self-regulatory orientation on emotions and on subsequent customer behavior? The positive and negative emotions experienced in failed-service situations influence the overall degree of satisfaction or dissatisfaction with the service (Zeelenberg and Pieters 1999). These specific emotions also partly determine subsequent behavior toward the service and the service provider, such as the likelihood of complaining, the degree to which customers switch or repurchase, and the extent of word-of-mouth communication (Zeelenberg and Pieters 2004). This study investigates the antecedents and consequences of negative consumption emotions and the moderating effect of attribution processing in an integrated model (self-regulatory mode → specific emotions → behavioral responses), focusing on the effects of regret and disappointment on consumer behavior. There are essentially two approaches in this line of research, the valence-based approach and the specific-emotions approach, and we argue theoretically and show empirically that distinguishing between them matters in services research. The study also examines the influence of two regulatory mode concerns (locomotion orientation and assessment orientation) on post-decisional regret and disappointment (Pierro, Kruglanski, and Higgins 2006; Pierro et al. 2008). When contemplating a decision with a negative outcome, we predicted that high (vs. low) locomotion would induce more disappointment than regret, whereas high (vs. low) assessment would induce more regret than disappointment. The validity of the measurement scales was confirmed by evaluations from the participating respondents and an independent advisory panel, whose recommendations informed the primary, exploratory phases of the study. The goodness-of-fit statistics showed an RMR/RMSEA of 0.05, GFI and AGFI greater than 0.9, and a chi-square of 175.11. The indicators were good measures of their constructs and showed high convergent validity, with reliabilities above 0.9; some items were deleted, leaving those that reflected the cognitive dimension of importance. These results indicate that the measurement model fits the sample data well and is adequate for use. The scale for each factor was set by fixing the loading of one of its indicator variables, and the model was estimated by maximum likelihood. The directions of the effects in the model are supported by the theory underpinning its causal linkages. The research proposed six hypotheses on six latent variables and tested them through structural equation modeling; the paths of the research model were statistically significant and the overall fit of the structural model was acceptable. In addition, locomotion orientation influences disappointment more positively when internal attribution is high than when it is low, and assessment orientation influences regret more positively when external attribution is high than when it is low. In sum, the results suggest that assessment and locomotion concerns, both as chronic individual predispositions and as situationally induced states, influence how much regret and disappointment people experience. These findings contribute to our understanding of regulatory mode, regret, and disappointment. Previous studies of regulatory mode have paid relatively little attention to the post-actional, evaluative phase of self-regulation; the present findings indicate that assessment and locomotion concerns are clearly distinct in this phase, with individuals higher in assessment delving more into possible alternatives to past actions and individuals higher in locomotion engaging less in such reflective thought. This suggests that, beyond simply reducing counterfactual thinking, individuals with locomotion concerns want to move on and get on with it; regret is about the past rather than the future, so individuals with locomotion concerns are less likely to experience it. The results supported our predictions, and we discuss what they imply about the nature of regret and disappointment in relation to regulatory mode. Self-regulatory mode and the specific emotions (disappointment and regret) were measured, and their influence on customers' behavioral responses (inaction, word of mouth) was examined using a sample of 275 customers. Emotions were found to have a direct impact on behavior beyond a general negative-valence effect. Hence, we argue against collapsing emotions such as regret and disappointment into a single valence-based response measure and in favor of a specific-emotions approach grounded in self-regulation. Implications for services marketing practice and theory are discussed.
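The abstract reports scale reliabilities above 0.9. As a small illustration of how such a reliability coefficient (Cronbach's alpha) can be computed from item responses, here is a hedged sketch on simulated data; the items, sample, and the paper's actual SEM estimation are not reproduced.

```python
# Illustrative computation of Cronbach's alpha for a multi-item scale,
# of the kind of reliability (> 0.9) reported in the abstract above.
# The item responses are simulated placeholders, not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(275, 1))                  # 275 respondents, as in the study
items = latent + 0.3 * rng.normal(size=(275, 4))    # four correlated indicators (hypothetical)
print(round(cronbach_alpha(items), 3))
```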


Empirical Analysis on Bitcoin Price Change by Consumer, Industry and Macro-Economy Variables (비트코인 가격 변화에 관한 실증분석: 소비자, 산업, 그리고 거시변수를 중심으로)

  • Lee, Junsik;Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.195-220 / 2018
  • In this study we conducted an empirical analysis of the factors that affect changes in the Bitcoin closing price. Previous studies have focused on the security of the blockchain system, the economic ripple effects of cryptocurrency, its legal implications, and consumer acceptance; cryptocurrency has been studied in many areas, and researchers, practitioners, and governments in many countries are trying to use cryptocurrencies and apply their underlying technology. Despite the rapid, dramatic changes in cryptocurrency prices and their growing impact, empirical study of the factors affecting cryptocurrency price changes has been scarce, limited to a few studies, business reports, and short working papers. It is therefore necessary to determine which factors affect changes in the Bitcoin closing price. For the analysis, hypotheses were constructed along three dimensions, consumer, industry, and macro-economy, and time series data were collected for the variables of each dimension. The consumer variables consist of search traffic for Bitcoin, search traffic for a Bitcoin ban, search traffic for ransomware, and search traffic for war. The industry variables comprise GPU vendors' stock prices and memory vendors' stock prices. The macro-economic variables include U.S. dollar index futures, the FOMC policy interest rate, and the WTI crude oil price. Using these variables, we ran a time series regression to find the relationship between them and changes in the Bitcoin closing price. Before the regression, we performed unit-root tests to verify the stationarity of the time series data and avoid spurious regression, and then ran the regression on the stationary data. The analysis found that the change in the Bitcoin closing price is negatively related to search traffic for 'Bitcoin ban' and to U.S. dollar index futures, while changes in GPU vendors' stock prices and in the WTI crude oil price showed positive effects. In the case of 'Bitcoin ban', the issue directly determines whether Bitcoin trading will be maintained or abolished, so consumers react sensitively and the Bitcoin closing price is affected. GPUs are the raw material of Bitcoin mining; in general, a rise in a company's stock price reflects growth in sales of its products and services, so increases in GPU demand are indirectly reflected in GPU vendors' stock prices, and GPU vendors' stock prices are thereby linked to changes in the Bitcoin closing price. We also confirmed that U.S. dollar index futures moved in the opposite direction to the Bitcoin closing price; Bitcoin moved like gold, which consumers regard as a safe asset, suggesting that consumers treat Bitcoin as a safe asset. On the other hand, the WTI oil price moved in the same direction as the Bitcoin closing price, which implies that Bitcoin is also regarded as an investment asset, like a product of the raw materials market. The variables that were not significant in the analysis were search traffic for Bitcoin, search traffic for ransomware, search traffic for war, memory vendors' stock prices, and the FOMC policy interest rate. For Bitcoin search traffic, we judged that interest in Bitcoin did not lead directly to purchases; search traffic does not reflect all of Bitcoin's demand, which implies that other factors regulate or mediate Bitcoin purchases. For ransomware search traffic, it is hard to say that concern about ransomware determined overall Bitcoin demand, because only a few people were damaged by ransomware, the share of attackers demanding Bitcoin was low, and such information security problems are discrete events rather than continuous issues. Search traffic for war was also not significant; as in the stock market, war is generally a negative factor, but in exceptional cases such as the Gulf War it shifts stakeholders' profits and environments, and we think this is a similar case. Memory vendors' stock prices were not significant because their flagship products are not the VRAM that is essential for Bitcoin mining. As for the FOMC policy interest rate, when rates are low surplus capital is usually invested in securities such as stocks, but Bitcoin's price fluctuations were so large that consumers did not recognize it as an attractive asset; moreover, unlike the stock market, Bitcoin has no safety mechanisms such as circuit breakers or sidecars. Through this study, we verified which factors affect changes in the Bitcoin closing price and interpreted why those changes occurred. By establishing the characteristics of Bitcoin as both a safe asset and an investment asset, we provide a guide for how consumers, financial institutions, and government organizations can approach cryptocurrency. Moreover, by corroborating the factors affecting changes in the Bitcoin closing price, the study gives researchers clues about which factors should be considered in future cryptocurrency studies.
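As a hedged sketch of the two analysis steps described above (unit-root testing to avoid spurious regression, then a time series regression on stationary data), the following uses statsmodels; the DataFrame columns and data file are illustrative placeholders, not the paper's dataset.

```python
# Sketch: (1) ADF unit-root test, differencing until stationary, then
# (2) OLS of Bitcoin closing-price changes on the candidate drivers.
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("bitcoin_daily.csv", parse_dates=["date"], index_col="date")

def make_stationary(s: pd.Series, alpha: float = 0.05) -> pd.Series:
    """Difference the series until the ADF test rejects a unit root."""
    while adfuller(s.dropna())[1] > alpha:      # index 1 = ADF p-value
        s = s.diff()
    return s

y = make_stationary(df["btc_close"])
X = df[["ban_search", "gpu_stock", "dollar_index", "wti_oil"]].apply(make_stationary)

model = sm.OLS(y, sm.add_constant(X), missing="drop").fit()
print(model.summary())   # signs and significance correspond to the effects discussed above
```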

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data used in the analysis comprise 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indicators. Unlike most prior studies, which used default events as the learning target, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and it reflects the differences in default risk that exist even among ordinary companies. Because learning was conducted only with corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. The approach can therefore provide stable default risk assessment for unlisted companies, such as small and medium-sized enterprises and startups, whose default risk is hard to determine with traditional credit rating models. Although the prediction of corporate default risk with machine learning has been studied actively in recent years, most studies make predictions with a single model, which raises model bias issues. A stable and reliable valuation methodology is required, given that default risk information is used very widely in the market and sensitivity to differences in default risk is high, and strict standards apply to the calculation method: the credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for evaluation methods, including verification of their adequacy, that take into account past statistical data and experience with credit ratings as well as future changes in market conditions. This study reduces the bias of individual models by using a stacking ensemble technique that combines various machine learning models, which makes it possible to capture the complex nonlinear relationships between default risk and corporate information while retaining the short computation time that is an advantage of machine learning-based default risk models. To produce the sub-model forecasts used as input to the stacking ensemble, the training data were divided into seven pieces and the sub-models were trained on the divided sets to generate forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data and each model's predictive power was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP and CNN models. In addition, this study offers existing credit rating agencies a methodology for adopting machine learning-based default risk prediction, since traditional credit rating models can also be included as sub-models when calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations by combining various sub-models. We hope this research will be used to increase practical adoption by overcoming and improving the limitations of existing machine learning-based models.
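The following is a minimal sketch of the stacking idea described above (out-of-fold sub-model forecasts feeding a meta-model, then a paired nonparametric comparison with a single model), using scikit-learn and SciPy; the features, target, and sub-model set (no CNN, no Merton-based target) are placeholders, not the paper's setup.

```python
# Sketch of a stacking ensemble vs. a single Random Forest, with a
# Shapiro-Wilk normality check and a Wilcoxon comparison of paired errors.
# All data are simulated stand-ins for the financial-ratio features and risk target.
import numpy as np
from scipy.stats import shapiro, wilcoxon
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))                                  # stand-in features
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=2000)      # stand-in default-risk target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("mlp", MLPRegressor(max_iter=1000, random_state=0))],
    final_estimator=LinearRegression(),
    cv=7,                                   # out-of-fold sub-model forecasts, akin to a 7-way split
)
stack.fit(X_tr, y_tr)

rf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
err_stack = np.abs(stack.predict(X_te) - y_te)
err_rf = np.abs(rf.predict(X_te) - y_te)

# If the paired error differences are non-normal, compare them nonparametrically.
if shapiro(err_stack - err_rf)[1] < 0.05:
    print(wilcoxon(err_stack, err_rf))
```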

A Study on the Various Attributes of E-Sport Influencing Flow and Identification (e-스포츠의 다양한 속성이 유동(flow)과 동일시에 미치는 영향에 관한 연구)

  • Suh, Mun-Shik;Ahn, Jin-Woo;Kim, Eun-Young;Um, Seong-Won
    • Journal of Global Scholars of Marketing Science / v.18 no.1 / pp.59-80 / 2008
  • E-sports has recently been growing into a new industry with a conspicuous profit model, but studies dealing with e-sports are still scarce. The purposes of this paper are therefore to establish a basic model for designing e-sports marketing strategy and to contribute to future e-sports research. Research explaining sports sponsorship through identification theory has recently emerged; many studies argue that an appropriate degree of identification is a requirement for sponsors to improve their images, which is essential to sponsorship activity. Accordingly, the core of this study is research on sponsorship and identification in e-sports rather than in physical sports. We extracted our variables from the major characteristics of the online environment and from existing sport sponsorship research. First, because e-sports are tournaments or leagues played through online games, the main event can be regarded as the online game itself, and the attributes of online media differ from those of offline media: interactivity, anonymity, and expandability can be mentioned as attributes of e-sports games. These inherent online attributes are examined in relation to flow. Second, in physical sports, Fisher (1998) found that team similarity and team attractiveness were positively related to team identification, and Wann (1996) reported that the result of a previous game influences the evaluation of the next game and in turn affects supporters' identification with the team. Considering these results in the e-sports context, a gamer's attractiveness, similarity, and match results appear to be important antecedents of identification with that gamer, so these gamer attributes are examined in relation to both flow and identification with a gamer. Csikszentmihalyi (1988) defined flow as the state in which one's current experience feels optimally positive, and Hoffman and Novak (1996) noted that users who experience flow will revisit a website without any external reward; flow should therefore be positively associated with identification with a gamer. Swanson (2003) further showed that team identification leads to positive sponsorship outcomes, including attitude toward sponsors, sponsor patronage, and satisfaction with sponsors; identification with a gamer is thus expected to be significantly connected with identification with the sponsoring corporation. Based on the above, we designed the research model. All variables used in this study (interactivity, anonymity, expandability, attractiveness, similarity, match result, flow, identification with a gamer, and identification with a sponsor) were operationally defined on the basis of prior research. The sample was collected in June 2006 from people with experience of enjoying e-sports; most respondents are men, because far more men than women enjoy e-sports in general. A two-step approach was used to test the hypotheses. First, confirmatory factor analysis was conducted to guarantee the validity and reliability of the variables; the results showed that all variables had convergent and discriminant validity as well as reliability. The research model was then examined as a full structural equation model using LISREL 8.3, and the fit of the suggested model was mostly at an acceptable level. Summarizing the results: among the e-sports game attributes, only interactivity, a basic feature of the online situation, affected flow positively. Among the gamer attributes, similarity with a gamer and match results influenced flow positively, but the attractiveness of a gamer had no significant effect on flow. As expected, similarity had a significant effect on identification with a gamer, but unexpectedly attractiveness and match results did not. Consistent with many prior studies, flow strongly influenced identification with a gamer, and identification with a gamer in turn had a significant influence on identification with a sponsor. These results have several implications. If an e-sports sponsor supports a professional gamer who has clearly superior ability and is similar to the users who enjoy e-sports, many amateur gamers will experience more flow and stronger identification with the pro-gamer and, in the end, identification with the sponsor; such identification leads people who enjoy e-sports to form purchase intentions for the sponsor's products and to spread positive word of mouth about those products or the sponsor. For future studies, we recommend a few directions. Based on these results, new e-sports-related variables not covered here should be identified, for which qualitative research into the inherent attributes of e-sports seems necessary. Finally, to generalize results related to e-sports, a wide range of generations rather than a single generation should be studied.
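The paper estimates the full structural model in LISREL 8.3. Purely as a rough illustration of the hypothesized paths, the sketch below runs a simple path analysis on composite (averaged) scale scores with OLS; the column names are hypothetical stand-ins for the constructs above, and this approximation is not the authors' estimation procedure.

```python
# Hedged sketch: path analysis on composite scores approximating the hypothesized
# structure (game/gamer attributes -> flow -> gamer identification -> sponsor identification).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("esports_survey.csv")   # placeholder file with one composite score per construct

paths = [
    "flow ~ interactivity + anonymity + expandability + attractiveness + similarity + match_result",
    "gamer_identification ~ flow + attractiveness + similarity + match_result",
    "sponsor_identification ~ gamer_identification",
]
for formula in paths:
    res = smf.ols(formula, data=df).fit()
    print(formula)
    print(res.params.round(3), "\n")
```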


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • Since the emergence of the Internet, social media built on highly interactive Web 2.0 applications has given consumers and companies a very user-friendly means of communicating with each other. Users routinely publish content expressing their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and that content is released on the Internet in real time. For this reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques for extracting, classifying, understanding, and assessing the opinions implicit in text, are frequently applied to social media content because they emphasize determining sentiment polarity and extracting authors' opinions, and a number of frameworks, methods, techniques, and tools have been proposed. However, many of these methods are technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study we formulate a more comprehensive and practical approach to opinion mining with visual deliverables, describing the entire cycle of practical opinion mining on social media content from initial data gathering to the final presentation. The proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as open APIs, search tools, DB-to-DB interfaces, or purchased content. The second phase is pre-processing to generate useful material for meaningful analysis: if garbage data are not removed, the results of social media analysis will not yield meaningful business insights, so natural language processing techniques should be applied to clean the data. The next phase is opinion mining on the cleansed content set; the qualified data include not only user-generated text but also identifying information such as creation date, author name, user ID, content ID, hit counts, review or reply status, favorites, and so on. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is used for reputation analysis, and there are further applications such as stock prediction, product recommendation, and sales forecasting. The last phase is the visualization and presentation of the analysis results; its purpose is to explain the results and help users understand their meaning, so the deliverables should be as simple, clear, and easy to understand as possible rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds 66.5% of the market and has kept the No. 1 position in the Korean ramen business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content, we built instant-noodle-specific language resources for data manipulation and analysis using natural language processing, and classified the content into more detailed categories such as marketing features, environment, and reputation. In these phases we used free software, including the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other vivid, full-color visualizations produced with open-source R packages. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can explain the movement of sentiment or volume in a category-by-time matrix through the density of color over time periods. A valence tree map, one of the most comprehensive and holistic visualization models, can present buzz volume and sentiment for a given period in a hierarchical structure, helping analysts and decision makers quickly grasp the big picture of the business situation. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how such results can support timely decision making in response to ongoing changes in the market. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not only in the food industry but in other industries as well.
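The paper's pipeline was built with R packages (TM, KoNLP, ggplot2, plyr). Only as a compact analogue of the four phases described above (collect, qualify, analyze, visualize), here is a hedged Python sketch; the lexicon, column names, and input file are illustrative assumptions, not the study's resources.

```python
# Toy analogue of the collect -> qualify -> analyze -> visualize cycle described above.
import pandas as pd
import matplotlib.pyplot as plt

# Collect: assume posts were already gathered into a CSV with text, id, and timestamp columns.
posts = pd.read_csv("noodle_social_posts.csv", parse_dates=["created"])

# Qualify: drop empty and duplicate rows before analysis.
posts = posts.dropna(subset=["text"]).drop_duplicates(subset=["content_id"])

# Analyze: toy lexicon-based sentiment scoring (stand-in for the domain-specific lexicon).
lexicon = {"delicious": 1, "tasty": 1, "love": 1, "bland": -1, "expensive": -1, "bad": -1}
def score(text: str) -> int:
    return sum(w for word, w in lexicon.items() if word in text.lower())
posts["sentiment"] = posts["text"].map(score)

# Visualize: monthly buzz volume and net sentiment, the kind of summary shown as graphs/heat maps.
monthly = posts.set_index("created").resample("M").agg({"text": "count", "sentiment": "sum"})
monthly.columns = ["volume", "net_sentiment"]
monthly.plot(subplots=True, figsize=(8, 4))
plt.tight_layout()
plt.show()
```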

Image Watermarking for Copyright Protection of Images on Shopping Mall (쇼핑몰 이미지 저작권보호를 위한 영상 워터마킹)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.147-157 / 2013
  • With the advent of a digital environment that can be accessed anytime and anywhere over high-speed networks, the free distribution and use of digital content have become possible. Ironically, this environment also gives rise to a variety of copyright infringements, and product images used in online shopping malls are pirated frequently. Whether shopping mall images are creative works is a contested issue. According to a Supreme Court decision in 2001, advertising pictures of ham products that merely reproduce the appearance of the objects in order to convey product information were judged not to be creative expression; nevertheless, the photographer's losses were recognized, and damages were estimated based on the typical cost of such an advertising photo shoot. According to a Seoul District Court precedent in 2003, if the photographer's personality and creativity are evident in the selection of the subject, the composition of the scene, the control of the direction and amount of light, the camera angle, the shutter speed and shutter timing, and other choices in shooting, developing, and printing, the work should be protected by copyright law. For shopping mall images to receive copyright protection under the law, they must not simply convey the state of the product; effort is required so that the photographer's personality and creativity can be recognized. Accordingly, the cost of producing shopping mall images increases, and the need for copyright protection grows. Product images in online shopping malls have a very particular composition, unlike general pictures such as portraits or landscape photos, so general image watermarking techniques cannot satisfy their watermarking requirements: because the background of product images is usually white, black, or a gray-scale gradient, there is little room to embed a watermark, and those areas are very sensitive to even slight changes. In this paper, the characteristics of images used in shopping malls are analyzed and a watermarking technique suited to them is proposed. The proposed technique divides a product image into small blocks, transforms each block with the DCT (Discrete Cosine Transform), and inserts the watermark information by quantizing the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes the coefficients near block boundaries finely and the coefficients in the center of the block coarsely; this mask improves the subjective visual quality as well as the objective quality of the images. In addition, to improve the security of the algorithm, the blocks in which the watermark is embedded are selected randomly, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal-to-Noise Ratio) of shopping mall images watermarked by the proposed algorithm is 40.7 to 48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF = 70 is 0, meaning that the watermarked image is of high quality and the algorithm is robust to the JPEG compression generally used in online shopping malls. The BER is also 0 for a 40% change in size and a 40-degree rotation. In general, shopping malls use compressed images with a QF higher than 90, and because pirated images are replicated from the originals, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds. However, future work should enhance the robustness of the proposed algorithm, because some robustness is lost after the mask process.
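As a minimal sketch of the block-DCT coefficient-quantization idea described above (not the paper's algorithm: the weighted mask, random block selection, and turbo coding are omitted), one watermark bit can be carried by forcing the parity of a quantized mid-frequency coefficient; block size, coefficient position, and step size here are illustrative choices.

```python
# Hedged sketch: embed/extract one bit per 8x8 block via DCT-coefficient quantization.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b.T, norm="ortho").T, norm="ortho")
def idct2(b): return idct(idct(b.T, norm="ortho").T, norm="ortho")

def embed_bit(block: np.ndarray, bit: int, pos=(3, 2), step=12.0) -> np.ndarray:
    """Quantize one mid-frequency DCT coefficient to a multiple of `step` whose parity is `bit`."""
    c = dct2(block.astype(float))
    q = np.round(c[pos] / step)
    if int(q) % 2 != bit:          # adjust parity so the quantized coefficient carries the bit
        q += 1
    c[pos] = q * step
    return idct2(c)

def extract_bit(block: np.ndarray, pos=(3, 2), step=12.0) -> int:
    c = dct2(block.astype(float))
    return int(np.round(c[pos] / step)) % 2

# Example on a synthetic 8x8 block with a near-uniform, shopping-mall-like background.
block = np.full((8, 8), 250.0) + np.random.default_rng(0).normal(0, 2, (8, 8))
for bit in (0, 1):
    assert extract_bit(embed_bit(block, bit)) == bit
```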

Development of the Accident Prediction Model for Enlisted Men through an Integrated Approach to Datamining and Textmining (데이터 마이닝과 텍스트 마이닝의 통합적 접근을 통한 병사 사고예측 모델 개발)

  • Yoon, Seungjin;Kim, Suhwan;Shin, Kyungshik
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.1-17 / 2015
  • In this paper, we report our observations on a prediction model for the military based on enlisted men's internal data (cumulative records) and external data (SNS data). This work matters for the military's efforts to supervise and look after its soldiers: despite their efforts, many commanders have failed to prevent accidents involving their subordinates. One of the important duties of officers is to take care of their subordinates and prevent unexpected accidents, but accidents are hard to prevent, so a proper method must be found. Our motivation for this paper is to make it possible to predict accidents using enlisted men's internal and external data. The biggest issue facing the military is accidents by enlisted men related to maladjustment and the relaxation of military discipline, and the core of preventing such accidents is to identify problems and manage them quickly. Commanders currently predict accidents by interviewing their soldiers and observing their surroundings, which requires considerable time and effort and produces results that differ significantly with the capabilities of each commander. In this paper, we instead seek to predict accidents with objective data that can be obtained easily. Recently, the records of enlisted men, as well as SNS communication between commanders and soldiers, have made it possible to predict and prevent accidents. This paper applies data mining to identify soldiers' interests and predict accidents using internal and external (SNS) data, combining topic analysis with a decision tree method. The study is conducted in two steps: first, topic analysis is performed on the enlisted men's SNS data; second, the decision tree method analyzes the internal data together with the results of the first analysis. The dependent variable for these analyses is whether any accident occurred. Analyzing the SNS data requires tools such as text mining and topic analysis; we used SAS Enterprise Miner 12.1, which provides a text mining module. Our approach to finding soldiers' interests consists of three main phases: collecting, topic analysis, and converting the topic analysis results into scores used as independent variables. In the first phase, we collect the enlisted men's SNS data by commander ID. After gathering the unstructured SNS data, the topic analysis phase extracts issues from them; for simplicity, five topics (vacation, friends, stress, training, and sports) were extracted from 20,000 articles. In the third phase, these five topics are quantified as personal scores and added to the independent variables, which consist of 15 internal data fields. We then build two decision trees: the first uses only the internal data, and the second uses the external (SNS) data as well as the internal data. Comparing the misclassification rates from SAS Enterprise Miner, the first model's misclassification rate is 12.1%, while the second model's is 7.8%; the second model thus predicts accidents with an accuracy of approximately 92%, a gap of 4.3 percentage points between the two models. Finally, we test whether the difference between them is meaningful using the McNemar test, and the result is significant (p-value: 0.0003). This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small number of enlisted men's records. Second, various independent variables used in the decision tree model are treated as categorical rather than continuous variables, so some information is lost. Despite extensive efforts to provide prediction models for the military, commanders' predictions are accurate only when they have sufficient data about their subordinates. Our proposed methodology can support decision-making in the military, and this study is expected to contribute to the prevention of accidents based on scientific analysis of enlisted men and their proper management.
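As a hedged sketch of the comparison described above (a tree on internal records only versus a tree that also uses SNS topic scores, followed by McNemar's test on the paired predictions), the following uses scikit-learn and statsmodels on simulated stand-in data; the study itself used SAS Enterprise Miner on real records.

```python
# Sketch: compare two decision trees (internal vs. internal + topic features)
# and test the paired prediction difference with McNemar's test.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
internal = rng.normal(size=(2000, 15))            # 15 internal variables, as in the study
topics = rng.normal(size=(2000, 5))               # 5 SNS topic scores (vacation, friends, ...)
y = (internal[:, 0] + topics[:, 2] + rng.normal(size=2000) > 1).astype(int)   # accident indicator

X1 = internal
X2 = np.hstack([internal, topics])
idx_tr, idx_te = train_test_split(np.arange(2000), random_state=0)

pred1 = DecisionTreeClassifier(random_state=0).fit(X1[idx_tr], y[idx_tr]).predict(X1[idx_te])
pred2 = DecisionTreeClassifier(random_state=0).fit(X2[idx_tr], y[idx_tr]).predict(X2[idx_te])

correct1, correct2 = pred1 == y[idx_te], pred2 == y[idx_te]
table = [[np.sum(correct1 & correct2),  np.sum(correct1 & ~correct2)],
         [np.sum(~correct1 & correct2), np.sum(~correct1 & ~correct2)]]
print(mcnemar(table, exact=True))                 # is the accuracy difference significant?
```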

A Study on the Daesoon Cosmology of the Correlative Relation between Mugeuk and Taegeuk (무극과 태극 상관연동의 대순우주론 연구)

  • Kim, Yong-hwan
    • Journal of the Daesoon Academy of Sciences / v.33 / pp.31-62 / 2019
  • The purpose of this article is to study the Daesoon cosmology of the correlative relation between Mugeuk and Taegeuk. Daesoon cosmology is a cosmology based on the juxtaposition of the Gucheon Sangje and the world. In this article, I argue that this theory in Daesoon Thought developed in three stages: the phase of the Mugeuk Transcendence of Gucheon Sangje, the stage of Taegeuk Immanence, and the phase of the Grand Opening of the Later World between Mugeuk and Taegeuk as a correlative, gentle reign. First of all, the phase of the Mugeuk Transcendence of Gucheon Sangje is revealed as a yin-yang relationship. The stage of Taegeuk Immanence represents the togetherness of harmony and co-prosperity between yin and yang, and the phase of the Grand Opening of the Later World between Mugeuk and Taegeuk refers to the unshakable accomplishment of its character and energy. This is due to the practical mechanism in the correct balance of yin and yang, which makes a four-stage cycle of birth, growth, harvest, and storage. In addition, the Daesoon stage of the settlement of yin and yang is revealed as a change in the growth of all things and the formation of the inner circle. Mental growth reveals the characteristics of everything in the world, each trying to shine at the height of its own life as it grows up energetically. The dominant culture of cerebral communion renders a soft and elegant mood and combines yin and yang to elevate the heavenly and earthly period, through transcendental change, into sympathetic understanding. The stage of the Grand Opening of the Later World between Mugeuk and Taegeuk is one of the earliest days of the lunar month and also the inner circle of Taegeuk. It is in line with Ken Wilbur's integrated model as a step toward the true degrees, developing into a world with brightened degrees. It is a beautiful and peaceful scene in which celestial maidens play music, the firewood burns, and the scholars playfully command thunder and lightning. Human beings achieve a state of happiness as free beings who live as gods upon the earth. This is the world of the Grand Opening of the Later World between Mugeuk and Taegeuk. Daesoon Thought was succeeded by Dojeon in 1958, when Dojeon emerged as the successor in the lineage of religious orthodoxy and was assigned the task of handling Dao in its entirety. In addition, Daesoon is a circle and represents freedom and commonly shared happiness among the populace. Cosmology in Daesoon Thought enables us to understand deep dimensions and the identity of members as individuals within an inner circle of correlation between transcendence and immanence. The present study analyzes the public effects philologically, as well as the mutual correlation, by drawing on the truthfulness of the literature and rational interpretation. The outlook for the future in Daesoon Thought also leads to the one-way communication of Daesoon as a circle.

Directions of Implementing Documentation Strategies for Local Regions (지역 기록화를 위한 도큐멘테이션 전략의 적용)

  • Seol, Moon-Won
    • The Korean Journal of Archival Studies / no.26 / pp.103-149 / 2010
  • Documentation strategy has been experimented with in various subject areas and local regions since the late 1980s, when it was proposed as an archival appraisal and selection method by archival communities in the United States. Though it has been criticized as too idealistic, it is worth shedding new light on the strategy's potential for documenting local regions in a digital environment. The purpose of this study is to analyze the implementation issues of documentation strategy and to suggest directions for documenting local regions of Korea through its application. Although the strategy was developed more than twenty years ago, mostly in Western countries, it offers several implications for documenting local regions even in current digital environments. First, documentation strategy can enhance the value of archivists as well as of archives in local regions, because under the strategy the archivist should be an active shaper of history rather than a passive receiver of archives; it can also be a way to overcome the poor conditions of local archives management in Korea. Second, the strategy can encourage cooperation among collecting institutions in each region, including museums, libraries, archives, cultural centers, and history institutions; in a networked environment such cooperation can be achieved more effectively than in a traditional environment, where it imposes a heavy workload on the cooperating institutions. Third, the strategy can facilitate solidarity among various groups in a local region; analysis of documentation strategy projects shows that gathering the knowledge, passion, and enthusiasm of related groups is essential to implementing the strategy effectively, and it can also provide a methodology for minority groups in society to document their own memories. Considering the current archival infrastructure of Korea, this study suggests the following directions for documenting local regions. First, very selective and intensive documentation should be pursued rather than comprehensive documentation; although deciding which subjects have priority is a highly political problem, the interests of local community members as well as of professional groups should be seriously considered in the decision-making process. Second, it is effective to plan an integrated representation of local history while keeping local archives in distributed custody; it would be desirable to implement an archival gateway for integrated search and representation of local archives regardless of where they are held. Third, digital documentation using Web 2.0 technologies should be attempted. As a methodology for selecting and acquiring archives, documentation strategy cannot completely avoid the subjectivity and prejudices of the appraiser; to mitigate this problem, an open documentation system should be prepared that reflects the different interests of different groups. Fourth, it is desirable to apply a conspectus model, as used in cooperative collection management by libraries, to documenting local regions digitally. A conspectus can show the existing documentation strength and the intended future documentation intensity of each participating institution; using it, the documentation level for each subject area can be set cooperatively and effectively across the local regions.