• Title/Summary/Keyword: Work-time


A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, focusing only on new cars or only on used cars would risk producing biased estimates of the cross elasticity between them. Unfortunately, most previous studies of the automobile industry have focused only on new car models without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) examined the new and used car markets simultaneously to study the effect of new car model launches on used car prices, but their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply a nested logit model in which the car model is chosen at the first stage and the choice between new and used cars is made at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, so that new and used cars of all models are equally substitutable at a single stage. The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January 2009 - June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and the car model choice at the second stage, turned out to be mis-specified, since its dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted using the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and the status quo is restored; the new car then settles at a lowered market share because of the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as the focal brand to see how its new and used cars set prices, rebates or APR interactively, assuming that reactive cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities and therefore suggests a less aggressive used car price discount in response to the new car's rebate than the proposed nested logit model. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response for the Elantra's new and used cars could be. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected because of true mis-specification or because of the data structure transmitted by a typical car dealership, where both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there are dealerships that carry both new and used cars of various models, the NUB model might fit the data as well as the BNU model. Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
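The two-stage structure described above can be made concrete with a small numerical sketch. The snippet below is not the author's estimated model; it only illustrates how a two-level nested logit (car model at the first stage, new vs. used at the second) turns systematic utilities and a dissimilarity (inclusive value) parameter into choice probabilities, the quantities on which the cross-elasticity and rebate simulations are built. All utilities, alternative names and the dissimilarity value of 0.6 are illustrative assumptions.

```python
import numpy as np

def nested_logit_probs(utilities, nests, dissimilarity):
    """Choice probabilities for a two-level nested logit.

    utilities:     dict alternative -> systematic utility V_j
    nests:         dict car model -> list of its alternatives (new/used)
    dissimilarity: dict car model -> dissimilarity (inclusive value) parameter,
                   which must lie in (0, 1] for the model to be well specified.
    """
    inclusive, within = {}, {}
    for m, alts in nests.items():
        lam = dissimilarity[m]
        expv = {j: np.exp(utilities[j] / lam) for j in alts}
        denom = sum(expv.values())
        within[m] = {j: expv[j] / denom for j in alts}   # P(j | m): new vs. used
        inclusive[m] = np.log(denom)                     # inclusive value IV_m
    top = {m: np.exp(dissimilarity[m] * inclusive[m]) for m in nests}
    top_denom = sum(top.values())
    upper = {m: top[m] / top_denom for m in nests}       # P(m): car-model choice
    return {j: upper[m] * within[m][j] for m, alts in nests.items() for j in alts}

# Illustrative utilities only: price enters negatively, a rebate raises the new car's utility.
utilities = {"jetta_new": -0.8, "jetta_used": -0.5,
             "elantra_new": -0.9, "elantra_used": -0.6}
nests = {"jetta": ["jetta_new", "jetta_used"],
         "elantra": ["elantra_new", "elantra_used"]}
dissimilarity = {"jetta": 0.6, "elantra": 0.6}  # < 1: new/used of one model are closer substitutes
print(nested_logit_probs(utilities, nests, dissimilarity))
```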


Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of overflowing personal information is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous work on information privacy factors for context-aware applications has at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing; existing studies have focused on only a small subset of its technical characteristics, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify factors of information privacy in most studies despite the limits of users' knowledge of and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Conducting a survey on the assumption that the participants have sufficient experience with or understanding of the technologies shown in it may therefore not be valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-order list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priorities.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, which involves three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski's approach, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics that had a higher potential to increase users' privacy concerns. Secondly, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to what extent users have privacy concerns.
The traditional questionnaire method was not selected because, for context-aware personalized services, users almost completely lack understanding of and experience with the new technology. Regarding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensory network as the most important factors among the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and of deciding which technologies are needed and in what sequence, to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, along with the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following up on the evaluation of the sub-factors, additional studies would be needed on approaches to reducing users' privacy concerns regarding technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked at the next highest level of importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display of services to users in context-aware personalized services, moving toward the anywhere-anytime-any-device concept, have come to be regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
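As a rough illustration of the concordance analysis mentioned above, the sketch below computes Kendall's coefficient of concordance W, the statistic commonly used in Okoli-and-Pawlowski-style Delphi studies to check whether experts' rankings agree, together with the mean rank per factor that is used to build the final rank-order list. The abstract does not name the concordance statistic actually used, so treat the choice of Kendall's W as an assumption; the rank matrix is entirely hypothetical, and tie corrections are ignored.

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for a (raters x items) rank matrix.

    ranks[i, j] is the rank expert i assigned to factor j (1 = most important).
    W ranges from 0 (no agreement) to 1 (complete agreement among experts).
    Ties are not corrected for in this simplified version.
    """
    ranks = np.asarray(ranks, dtype=float)
    k, n = ranks.shape                                  # k experts, n concern factors
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()     # deviation of rank sums
    return 12.0 * s / (k ** 2 * (n ** 3 - n))

# Hypothetical rankings of five privacy-concern factors by four experts.
ranks = np.array([[1, 2, 3, 4, 5],
                  [1, 3, 2, 4, 5],
                  [2, 1, 3, 5, 4],
                  [1, 2, 4, 3, 5]])
print("Kendall's W:", round(kendalls_w(ranks), 3))
print("Mean rank per factor:", ranks.mean(axis=0))      # basis of the final rank-order list
```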

The Research on Recommender for New Customers Using Collaborative Filtering and Social Network Analysis (협력필터링과 사회연결망을 이용한 신규고객 추천방법에 대한 연구)

  • Shin, Chang-Hoon;Lee, Ji-Won;Yang, Han-Na;Choi, Il Young
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.19-42
    • /
    • 2012
  • Consumer consumption patterns are shifting rapidly as buyers migrate from offline markets to e-commerce channels such as TV shopping channels and internet shopping malls. In offline markets consumers go shopping, see the items, and choose from among them; recently, consumers tend to buy at shopping sites free from constraints of time and place. However, as e-commerce markets continue to expand, customers complain that shopping online is becoming a bigger hassle. In online shopping, shoppers have very limited information on the products, and the delivered products can differ from what they wanted, which often results in purchase cancellations. Because this happens frequently, consumers are likely to refer to consumer reviews, and companies should pay attention to the consumer's voice. E-commerce is a very important marketing tool for suppliers: it can recommend products to customers and connect them directly with suppliers with just a click of a button. Recommender systems are being studied in various ways; some of the more prominent approaches include recommendation based on best-sellers and demographics, content filtering, and collaborative filtering. However, these systems share two weaknesses: they cannot recommend products to consumers on a personal level, and they cannot recommend products to new consumers with no buying history. To fix these problems, information collected from questionnaires about demographics and preference ratings can be used, but consumers feel such questionnaires are a burden and are unlikely to provide correct information. This study investigates combining collaborative filtering with the centrality measures of social network analysis. The centrality measure provides information for inferring the preferences of new consumers from the shopping history of existing and previous ones. While past research focused on existing consumers with similar shopping patterns, this study tried to improve the accuracy of recommendation using all shopping information, including not only similar shopping patterns but also dissimilar ones. The data used in this study, the MovieLens data, was created by the GroupLens Research Project team at the University of Minnesota to recommend movies with a collaborative filtering technique. The data was built from the questionnaires of 943 respondents who gave preference ratings on 1,684 movies. The total of 100,000 ratings was organized by time, with the initial 50,000 treated as existing customers and the latter 50,000 as new customers. The proposed recommender system consists of three systems: the [+] group recommender system, the [-] group recommender system, and the integrated recommender system. The [+] group recommender system treats customers with similar buying patterns as 'neighbors', whereas the [-] group recommender system treats customers with opposite buying patterns as 'contraries'. The integrated recommender system uses both of the aforementioned recommender systems and recommends the movies that both of them pick. Studying the three systems allows us to find the most suitable recommender system to optimize accuracy and customer satisfaction. Our analysis showed that the integrated recommender system is the best solution among the three systems studied, followed by the [-] group recommender system and the [+] group recommender system. This result conforms to the intuition that the accuracy of recommendation can be improved by using all the relevant information.
We provided contour maps and graphs to easily compare the accuracy of each recommender system. Although we saw an improvement in accuracy with the integrated recommender system, we must remember that this research is based on static data with no live customers; in other words, consumers did not actually see the movies recommended by the system. Also, this recommendation system may not work well with products other than movies. Thus, it is important to note that recommendation systems need particular calibration for specific product/customer types.
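A minimal sketch of the three recommenders described above is given below, assuming a users-by-movies rating matrix like the MovieLens data. It builds Pearson similarities between users, scores unseen movies with a mean-centered weighted average so that negatively correlated 'contraries' are informative as well, and intersects the [+] and [-] top-N lists for the integrated recommender. The neighborhood size k, top_n, the min_periods threshold and the file layout in the usage comment are illustrative assumptions, and the paper's use of social network centrality for new customers is not reproduced here.

```python
import pandas as pd

def recommend(ratings, target, k=20, top_n=10):
    """Simplified sketch of the [+] / [-] / integrated recommenders described above.

    ratings: DataFrame (users x movies) with NaN for unrated items.
    target:  index label of the user to recommend for.
    """
    corr = ratings.T.corr(min_periods=5)                 # Pearson similarity between users
    sims = corr.loc[target].drop(target).dropna()
    unseen = ratings.columns[ratings.loc[target].isna()]
    user_means = ratings.mean(axis=1)

    def score(neighbors):
        # Resnick-style prediction: negative similarities let 'contraries' contribute,
        # because items they disliked push the prediction up for the target user.
        w = sims.loc[neighbors]
        centered = ratings.loc[neighbors, unseen].sub(user_means.loc[neighbors], axis=0)
        weights = centered.notna().mul(w, axis=0)        # weight only where a rating exists
        pred = user_means[target] + centered.fillna(0).mul(w, axis=0).sum() / weights.abs().sum()
        return set(pred.nlargest(top_n).index)

    plus = score(sims.nlargest(k).index)                 # 'neighbors' with similar patterns
    minus = score(sims.nsmallest(k).index)               # 'contraries' with opposite patterns
    return {"plus": plus, "minus": minus, "integrated": plus & minus}

# Hypothetical usage with MovieLens-style data (file name and column layout are assumed):
# ratings = pd.read_csv("ratings_matrix.csv", index_col="user_id")
# print(recommend(ratings, target=1))
```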

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived people's interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires a lot of effort to gather a large-scale dataset to train a ConvNet. Moreover, even when a large-scale dataset is available, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by using transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: the first uses the ConvNet as a fixed feature extractor, and the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. However, applying high-dimensional features directly extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding an optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. Firstly, images from the target task are fed forward through the pre-trained AlexNet, and the activation features from its three fully connected layers are extracted. Secondly, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation, because it carries more information about an image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy since they come from the same ConvNet. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify images more accurately, and the performance of transfer learning can be improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1% and 3.1% on the Caltech-256, VOC07, and SUN397 datasets respectively, compared to existing work.
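The three-step pipeline described above (feed-forward through a pre-trained AlexNet, concatenate the FC6/FC7/FC8 activations into a 9192-dimensional vector, then reduce with PCA before training a classifier) can be sketched roughly as below with PyTorch/torchvision and scikit-learn. This is not the authors' code: the choice of 512 principal components, the LinearSVC classifier and the data-loading flow in the comments are assumptions for illustration.

```python
import torch
from torchvision import models, transforms
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# ImageNet-pretrained AlexNet (torchvision >= 0.13 weights API) with forward hooks on its
# three fully connected layers, whose outputs are 4096-, 4096- and 1000-dimensional.
model = models.alexnet(weights="IMAGENET1K_V1").eval()
captured = {}
for name, idx in [("fc6", 1), ("fc7", 4), ("fc8", 6)]:
    model.classifier[idx].register_forward_hook(
        lambda _m, _inp, out, key=name: captured.__setitem__(key, out.detach()))

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def extract(images):
    """Feed-forward a list of PIL images and return the 9192-d concatenated features."""
    batch = torch.stack([preprocess(im) for im in images])
    with torch.no_grad():
        model(batch)
    return torch.cat([captured["fc6"], captured["fc7"], captured["fc8"]], dim=1).numpy()

# Hypothetical training flow on a target dataset such as Caltech-256:
# X_train, X_test = extract(train_images), extract(test_images)
# pca = PCA(n_components=512).fit(X_train)        # keep salient, de-correlated components
# clf = LinearSVC().fit(pca.transform(X_train), y_train)
# print("accuracy:", clf.score(pca.transform(X_test), y_test))
```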

A Cross-Sectional Study on Fatigue and Self-Reported Physical Symptoms of Vinylhouse Farmers (비닐하우스 농작업자의 피로도와 주관적 신체증상에 관한 연구)

  • Lim, Gyung-Soon;Kim, Chung-Nam
    • Journal of agricultural medicine and community health
    • /
    • v.28 no.2
    • /
    • pp.15-29
    • /
    • 2003
  • Objectives: This study was done to find out the fatigue and self-reported physical symptoms of vinylhouse farmers. The results can be used as basic data to develop a health promotion program for vinylhouse farmers who are suffering from fatigue and physical symptoms. Methods: The 166 respondents, who were working in vinylhouses and living in a remote area where the primary health post was located, participated in this study. A 30-item self-reported fatigue scale developed by the Japanese industrial and hygienic association (1988) was used to evaluate the farmers' fatigue level. A 24-item index for self-reported physical symptoms was taken from Lee In Bae's (1999) modified index, which originated from the Cornell Medical Index (1949). The other questionnaires used in this study were developed by the researcher from related documents. Results: The results of this study were as follows. Fatigue scores were higher for women (t=-2.212, p<0.05), worse perceived health state (F=20.610, p<0.001), lack of sleeping hours (F=3.937, p<0.05), irregular eating (t=-3.883, p<0.001), not taking a bath after applying chemicals (t=-2.950, p<0.01), and longer working time per day (F=5.633, p<0.01) and longer working time per day in the vinylhouse (F=5.247, p<0.01). Subjective physical symptoms were higher for women (t=-3.176, p<0.01), worse perceived health state (F=35.335, p<0.001), low education (F=3.467, p<0.05), irregular eating (t=-3.384, p<0.01), and alcohol drinking (t=-2.389, p<0.05); farmers who did not take a bath after applying chemicals also showed higher symptoms (t=-3.188, p<0.01). As a result, the factors affecting vinylhouse workers' health were an irregular diet, scarce exercise, lack of proper rest, and symptoms originating from vinylhouse work in a contaminated environment with high temperature and humidity. Conclusions: Based on this study, a health promotion program is necessary for vinylhouse workers. Also needed are the development of a continuously practicable healthy-lifestyle strategy including exercise, and a comprehensive health promotion program that takes the country's social and cultural background into account.


Interpretation of Praying Letter and Estimation of Production Period on Samsaebulhoedo at Yongjusa Temple (용주사(龍珠寺) <삼세불회도(三世佛會圖)>의 축원문(祝願文) 해석(解釋)과 제작시기(製作時期) 추정(推定))

  • Kang, Kwan-shik
    • MISULJARYO - National Museum of Korea Art Journal
    • /
    • v.96
    • /
    • pp.155-180
    • /
    • 2019
  • The Samsaebulhoedo(三世佛會圖) at Yongjusa Temple(龍珠寺), regarded as a monumental masterpiece combining different elements such as Confucian and Buddhist ideas, court painters' and Buddhist painters' styles, and traditional and Western painting techniques, is one of the representative works that symbolically illustrate the development and innovation of painting in the late Joseon dynasty. However, the absence of painting inscriptions has raised persistent controversy among researchers over the past half century concerning the estimation of its production period, the identification of its original painters, and the analysis of its stylistic characteristics. As a result, the work has failed to gain recognition commensurate with its historical significance and value. Estimating the production period of the existing masterpiece is the particularly vital issue, since it is the starting point of all other discussions. This issue, however, has caused continuing debate, because the details are difficult to interpret in a concise form owing to a number of different records on the painters and the mixture of traditional Buddhist painting styles used by Buddhist painters and innovative Western styles used by ordinary painters. Contrary to other ordinary Buddhist paintings, this painting, the Samsaebulhoedo, carries a praying letter for the royal household at the center of the main altar. It should be noted that in this letter the original formula - His Royal Highness King, Her Majesty, His Royal Crown Prince (主上殿下, 王妃殿下, 世子邸下) - was erased and Her Love Majesty (慈宮邸下) was added in front of Her Majesty. This praying letter can be taken as one of the most significant and objective pieces of evidence for estimating the production period. The new argument for a late 19th-century production focused on this praying letter and pointed out that King Sunjo was still the first-born son when Yongjusa Temple was built in 1790, and that it was not until January 1, 1800 that he was installed as Crown Prince; in this light, the existing praying letter with the eulogistic title Crown Prince (世子) should be considered a revision made after his installation. Styles and icons also bore some resemblance to the Samsaebulhoedo at Cheongryongsa Temple or Bongeunsa Temple painted by Buddhist painters in the late 19th century; therefore, the argument concludes, the remaining Samsaebulhoedo must have been painted by them in the same period, when Western styles had been introduced into Buddhist painting. Extensive investigation of praying letters in Buddhist paintings of the late 19th century, however, shows that it was usual to record specifics such as the class, birth date and family name of people living at the time the paintings were produced, and it is easy to find that eulogistic titles were not revised for those who had passed away decades earlier, contrary to what is assumed for the praying letter in the Samsaebulhoedo at Yongjusa Temple. As "His Royal Highness King, Her Majesty, His Royal Crown Prince" was generally used around 1790 regardless of the presence of a first-born son or a Crown Prince, it was rather natural to write the eulogistic title "His Royal Crown Prince" in the praying letter of the Samsaebulhoedo. Contrary to ordinary royal hierarchy, Her Love Majesty was placed in front of Her Majesty; based on this, the praying letter is assumed to have been revised because King Jeongjo placed the royal status of Hyegyeonggung before that of the Queen, an exceptional arrangement during King Jeongjo's reign, owing to the unusual relationships among King Jeongjo, Hyegyeonggung and the Queen arising from the death of the Crown Prince(思悼世子).
At that time, there was a special case in which a formal praying letter of the ordinary form seen in Buddhist paintings was first written, then partly erased, and a special eulogistic title, Her Love Majesty, was added. This indicates that King Jeongjo, finding that Hyegyeonggung had been omitted, commanded that she be added, since the principals of the ceremonies at Yongjusa Temple, built as the temple for holding ceremonies for Hyeonryungwon(顯隆園), were his father and his father's wife Hyegyeonggung (Her Love Majesty)(惠慶宮(慈宮)). This revision is believed to have been ordered by King Jeongjo on January 17, 1791, when the King paid his first visit to Hyeonryungwon since the establishment of Hyeonryungwon and Yongjusa Temple, stopped by Yongjusa Temple on his way back to the palace, and saw the Samsaebulhoedo for the first and last time. As shown above, this praying letter, with its special contents and form, can be seen as clear and objective testimony that the Samsaebulhoedo was originally painted in 1790, when Yongjusa Temple was built.

The actual aspects of North Korea's 1950s Changgeuk through the Chunhyangjeon in the film Moranbong(1958) and the album Corée Moranbong(1960) (영화 <모란봉>(1958)과 음반 (1960) 수록 <춘향전>을 통해 본 1950년대 북한 창극의 실제적 양상)

  • Song, Mi-Kyoung
    • (The) Research of the performance art and culture
    • /
    • no.43
    • /
    • pp.5-46
    • /
    • 2021
  • The film Moranbong is the product of a trip to North Korea in 1958, when Armangati, Chris Marker, Claude Lantzmann, Francis Lemarck and Jean-Claude Bonardo traveled there at the invitation of Joseon Film. For political reasons, however, the film was not released at the time, and it was not until 2010 that it was rediscovered and received attention. The movie consists of the narratives of Young-ran and Dong-il, set during the Korean War, which are folded into the narratives of Chunhyang and Mongryong in the Joseon classic Chunhyangjeon. The classic is reproduced in the form of the Changgeuk Chunhyangjeon, which shares its time frame with the two main characters, and the two narratives are covered in a total of six scenes. The movie contains two layers of inner frames: a story about the producers and actors of the Changgeuk Chunhyangjeon, set in North Korea in the 1950s, and the Changgeuk Chunhyangjeon itself as a complete work. In the outermost frame of the movie Dong-il is the main character, but in the inner double frame the center is Young-ran, who is both an actor growing up with the Changgeuk Chunhyangjeon and a character in it. There are three related OST albums: Corée Moranbong, released in France in 1960; Musique de corée, released in 1970; and 朝鮮の伝統音樂-唱劇 「春香伝」と伝統樂器-, released in Japan in 1968. While Corée Moranbong consists only of the music from the film Moranbong, the two later albums included additional songs collected and recorded by the Pyongyang National Broadcasting System. However, there is no information about the movie Moranbong on the album released in Japan; under the circumstances, it is highly likely that the author of the record label or liner notes did not know of the existence of the movie Moranbong, or intentionally excluded related content because of the ban on the film's release. The results of analyzing the detailed scenes of the Changgeuk Chunhyangjeon (Farewell Song, Sipjang-ga, Chundangsigwa, Bakseokti and Prison Song) in the movie Moranbong and on the OST album in the 1950s are as follows. First, the process of establishing the North Korean Changgeuk Chunhyangjeon in the 1950s was confirmed. The play, compiled in 1955 in the Joseon Changgeuk Collection, settled between 1956 and 1958 into the form of a Changgeuk that could be performed in the late 1950s. Since the 1960s, Chunhyangjeon has no longer been performed as a traditional pansori-style Changgeuk, so the film Moranbong and the album Corée Moranbong are almost the last records to capture the Changgeuk Chunhyangjeon and its music. Second, the Changgeuk actors' responses to the controversy over Takseong (turbidity of the voice) in the North Korean Changgeuk community of the 1950s were confirmed. Until 1959, there were both voices of criticism surrounding Takseong and voices of advocacy that it was a national characteristic. Shin Woo-sun almost eliminated Takseong with clear, high-pitched vocal sounds; Gong Gi-nam changed according to the situation, choosing Takseong but not actively removing it; and Cho Sang-sun and Lim So-hyang tried to maintain their own tones while accepting some of the modern vocalization required by the party. Although Cho Sang-sun and Lim So-hyang were also guaranteed roles that allowed them to continue their voices, the selection/exclusion patterns in the movie Moranbong were linked to the Takseong removal guidelines demanded, in the name of the Party and the People in the 1950s, of the Gugak musicians who had crossed to North Korea.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program developed by Google DeepMind, won a decisive victory against Lee Sedol. Many people had thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences is greater than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It shows especially good performance in image recognition, and it also performs well on high-dimensional data such as voice, images and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and it has a binary target variable that records whether the customer intends to open an account or not. To evaluate the applicability of deep learning algorithms and techniques to the binary classification problem, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, the traditional artificial neural network model. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons in the hidden layers, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features from them; for business data, however, the distance between fields does not matter, because the fields are usually independent. In this experiment we therefore set the filter size of the CNN to the number of fields, so that the whole characteristics of the data are learned at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer was reversed with respect to the first layer in order to reduce the influence of each field's position.
For the dropout technique, we set the neurons to drop out with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best model was the MLP model with two hidden layers using the dropout technique. From the experiment we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because the CNN performed well not only in the fields where its effectiveness has already been proven but also in a binary classification problem to which it has rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems, because its training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
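A rough sketch of the CNN-with-dropout configuration described above, written with Keras, is shown below: the Conv1D filter spans all input fields at once, an extra dense hidden layer follows, each hidden layer is followed by 0.5 dropout, and the F1 score is computed on held-out data. The number of fields (16), filter count, layer widths, epochs and batch size are illustrative assumptions, not the settings used in the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import f1_score  # used in the commented evaluation flow below

n_fields = 16                                  # illustrative number of input variables

# 1D convolution whose filter spans all fields at once (kernel_size = n_fields),
# followed by an extra hidden layer; each hidden layer is followed by 0.5 dropout.
model = keras.Sequential([
    keras.Input(shape=(n_fields, 1)),
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Dropout(0.5),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),     # binary target: opens an account or not
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Hypothetical training/evaluation flow with preprocessed bank-telemarketing data:
# X has shape (n_samples, n_fields) and must be reshaped to (n_samples, n_fields, 1).
# model.fit(X_train.reshape(-1, n_fields, 1), y_train, epochs=20, batch_size=64)
# y_pred = (model.predict(X_test.reshape(-1, n_fields, 1)) > 0.5).astype(int)
# print("F1:", f1_score(y_test, y_pred))
```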

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.47-67
    • /
    • 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, and it can lead to judgment errors above 30%. Therefore, an accurate steel plate faults diagnosis system has continuously been required by the industry. To meet this need, this study proposes a new steel plate faults diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification because of its low accuracy there; the reason is that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' implies comparing the Mahalanobis distances at the same time. The proposed steel plate faults diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, data on steel plate faults is collected and used to establish an individual Mahalanobis space for each reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratio and SN ratio gain. If the derived overall SN ratio gain is negative, the variable should be removed; a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed. An experimental test is then implemented to verify the multi-class classification ability, from which the classification accuracy is obtained; if the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed steel plate faults diagnosis system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The steel plates faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. As a result, the proposed steel plate faults diagnosis system based on S-MTS shows 90.79% classification accuracy. The accuracy of the proposed diagnosis system is 6-27% higher than MLPNN, LR, GS, GA and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in the industry.
In addition, the proposed system can reduce the number of measurement sensors installed in the field because of the variable optimization process. These results show that the proposed system not only performs well on steel plate faults diagnosis but can also reduce operation and maintenance costs. For future work, it will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
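The core idea of building one Mahalanobis space per fault class and assigning a test sample to the nearest space can be sketched as below. This is a simplified illustration only: the full S-MTS procedure in the paper also verifies separability, optimizes variables with orthogonal arrays and dynamic SN-ratio gains, and reconstructs the measurement scale, none of which is shown here, and the usage lines at the end are hypothetical.

```python
import numpy as np

class MahalanobisClassifier:
    """Minimal sketch of the per-class Mahalanobis-space idea behind S-MTS.

    One Mahalanobis space (mean, scale and correlation structure) is built for each
    fault class from its reference samples; a test sample is assigned to the class
    whose space gives the smallest Mahalanobis distance. Assumes no feature is
    constant within a class.
    """
    def fit(self, X, y):
        self.spaces = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mean, std = Xc.mean(axis=0), Xc.std(axis=0, ddof=1)
            Z = (Xc - mean) / std                           # standardize within the class
            corr_inv = np.linalg.pinv(np.corrcoef(Z, rowvar=False))
            self.spaces[c] = (mean, std, corr_inv)
        return self

    def predict(self, X):
        dists = []
        for mean, std, corr_inv in self.spaces.values():
            Z = (X - mean) / std
            d = np.einsum("ij,jk,ik->i", Z, corr_inv, Z) / Z.shape[1]   # scaled MD^2
            dists.append(d)
        classes = np.array(list(self.spaces))
        return classes[np.argmin(np.vstack(dists), axis=0)]

# Hypothetical usage with the UCI steel-plates-faults data (X: features, y: fault class):
# clf = MahalanobisClassifier().fit(X_train, y_train)
# print("accuracy:", (clf.predict(X_test) == y_test).mean())
```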

An Epidemiological Study on the Industrial Injuries among Metal Products Manufacturing Workers in Young-Dung-Po, Seoul (일부 금속 및 기계제품 제조업체 근로자들의 산업재해(1980~1981)에 관한 조사)

  • Lee, Jung-Hee
    • Journal of Preventive Medicine and Public Health
    • /
    • v.15 no.1
    • /
    • pp.187-196
    • /
    • 1982
  • The following are the results of a study on industrial accidents that occurred at 12 factories manufacturing metal products in the Yong-Dung-Po area of Seoul during the two-year period from January 1980 to December 1981. The results of the study are as follows: 1. The incidence rate of industrial injuries was 45.7 per 1,000 workers in the sample group, and the rate for males (54.0) was three times higher than that for females (17.5). 2. Among age groups, the highest rate was observed in the group under 19 years old, at 83.5, while the lowest was in the group in their 40s. 3. Workers with shorter work experience had a higher rate of injuries; in particular, the group of workers with less than 1 year of experience showed the highest share, at 48.1%. 4. By working time, the highest incidence occurred 3 and 7 hours after the beginning of work, with rates of 6.0 and 6.1 per 1,000 workers, respectively. 5. By day of the week, the highest incidence rate was observed on Monday, at 8.4 per 1,000 workers, accounting for 18.3% of injuries. 6. By month of the year, the highest incidence was observed in July, at 5.4 per 1,000 workers, and the next highest in March, at 4.8; these figures account for 11.8% and 10.5% of total occurrences in the respective months. 7. Among causes of injuries, accidents caused by power-driven machinery showed the highest share, at 37.5%; the second was handling without machinery, at 17.2%; the third was falling objects, at 14.2%; followed by striking against objects, at 10.2%, and so on. 8. By part of the body affected, most injuries (84.3%) occurred on the upper and lower extremities, with rates of 58.8% for the former and 25.5% for the latter. Fingers were most frequently injured, at 40.3%. Comparing the sides of the extremities affected, the rate of injuries on the right side was 55.0% and on the left side 45.0%. 9. By nature of injury, laceration and open wound were the highest at 34.0%, next were fracture and dislocation at 31.9%, and sprain was third at 8.1%. 10. Regarding the duration of treatment, it lasted less than one month in 68.9% of the injured cases, of which 14.5% recovered within 2 weeks and 54.4% were treated for more than 2 weeks; the duration of treatment tended to be longer in larger industries. 11. The ratio of insured accidents to uninsured accidents was 1 to 4.7.
