• Title/Summary/Keyword: Two-time scale model


Perceptional Change of a New Product, DMB Phone

  • Kim, Ju-Young;Ko, Deok-Im
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.3
    • /
    • pp.59-88
    • /
    • 2008
  • Digital convergence means integration across industries, technologies, and contents; in marketing, it usually accompanies the creation of new types of products and services built on digital technology, as digitalization progresses in electro-communication industries including telecommunications, home appliances, and computers. Digital convergence appears not only in devices such as PCs, AV appliances, and cellular phones, but also in the contents, networks, and services required for the production, modification, distribution, and re-production of information. Convergence in contents started around 1990; convergence in networks and services began as broadcasting and telecommunication integrated, and DMB (digital multimedia broadcasting), launched in May 2005, is the symbolic icon of this trend. There are both positive and negative expectations about DMB. These opposite expectations coexist because DMB arose not from customer needs but from technology development, so customers may have a hard time interpreting what DMB really means. Timing is critical for a high-tech product like DMB, because another product with the same function built on a different technology can replace it within a short period. If DMB is not positioned in customers' minds quickly, products like WiBro, IPTV, or HSDPA could replace it before it even spreads; a positioning strategy is therefore critical for the success of DMB. To build a correct positioning strategy, one needs to understand how consumers interpret DMB and how that interpretation can be changed through communication strategy. In this study, we investigate how consumers perceive a new product like DMB and how advertising changes that perception. More specifically, the paper segments consumers into sub-groups based on their DMB perceptions and compares their characteristics, and then exposes them to different printed ads whose messages guide consumers to think of DMB in a specific way, either as a cellular phone or as a personal TV. Research Question 1: Segment consumers according to their perceptions of DMB and compare the characteristics of the segments. Research Question 2: Compare perceptions of DMB after an ad that induces categorization of DMB in a given direction, for each segment. If one can understand and predict the direction in which consumers will perceive a new product, a firm can select target customers more easily. We segment consumers according to their perceptions and analyze their characteristics to find variables that can influence perception, such as prior experience, usage, or habit; marketers can then use these variables to identify target customers and predict their perceptions. Knowing how customers' perceptions change in response to ad messages also allows a proper communication strategy to be constructed; in particular, information from segmented customers helps develop an efficient ad strategy for a segment with a prior perception. The research framework consists of two measurements and one treatment, O1 X O2. The first observation collects information about consumers' perceptions and their characteristics. Based on it, the paper segments consumers into two groups, one that perceives DMB as similar to a cellular phone and one that perceives it as similar to a TV, and compares the characteristics of the two segments to find out why they perceive DMB differently. Next, we expose subjects to two kinds of ads.
One ad describes DMB as a cellular phone and the other as a personal TV. At the time of exposure, subjects are not sorted by their prior perception of DMB, that is, by whether they belong to the 'similar-to-cellular-phone' or the 'similar-to-TV' segment; however, we analyze the ad effect separately for each segment. In the research design, the final observation investigates the ad effect: perception before the ad is compared with perception after the ad, for each segment and for each ad. For the segment that perceives DMB as similar to TV, the ad describing DMB as a cellular phone could change the prior perception, while the ad describing DMB as a personal TV could reinforce it. For data collection, subjects were selected from undergraduate students because they have basic knowledge of most digital devices and an open attitude toward new products and media. The total number of subjects is 240. To measure perception of DMB, we use an indirect measurement: comparison with other similar digital products. To select these products, we pre-surveyed students and finally selected PDA, car TV, cellular phone, MP3 player, TV, and PSP. A quasi-experiment was conducted in several classes with the instructors' permission. After a brief introduction, prior knowledge, awareness, and usage of DMB and the other digital devices were asked, and their similarities and perceived characteristics were measured. Then two kinds of manipulated color-printed ads were distributed, and the similarities and perceived characteristics of DMB were re-measured. Finally, purchase intention, ad attitude, manipulation checks, and demographic variables were asked. Subjects were given a small gift for participation. The stimuli are color-printed advertisements, A4-sized, produced after several pre-tests with advertising professionals and students. As a result, consumers are segmented into two subgroups based on their perceptions of DMB. The similarity between DMB and cellular phone and the similarity between DMB and TV are used to classify consumers: a subject whose first measure is less than the second is classified into segment A, characterized as perceiving DMB like a TV; otherwise, the subject is classified into segment B, perceiving DMB like a cellular phone. Discriminant analysis on these groups with their usage and attitude characteristics shows that segment A knows much about DMB and uses many digital devices, while segment B, which thinks of DMB as a cellular phone, does not know DMB well and is not familiar with other digital devices. Thus, consumers with higher knowledge perceive DMB as similar to TV, because the launch advertising for DMB led consumers to think of DMB as TV, whereas consumers with less interest in digital products do not know the DMB ads well and therefore think of DMB as a cellular phone. To investigate perceptions of DMB and the other digital devices, we apply PROXSCAL, a multidimensional scaling (MDS) technique, in the SPSS statistical package. In the first step, subjects are presented with the 21 pairs of the 7 digital devices and give similarity judgments on a 7-point scale; for each segment, the judgments are averaged into a similarity matrix. Second, PROXSCAL analyses of segments A and B are performed. In the third stage, similarity judgments between DMB and the other devices are obtained after ad exposure.
Lastly, the post-exposure similarity judgments of groups A-1, A-2, B-1, and B-2 are labeled 'after DMB' and added to the matrices made in the first stage; PROXSCAL is applied to these matrices, and the positional difference between DMB and 'after DMB' is checked. The results show that the map of segment A, which perceives DMB as similar to TV, positions DMB closer to TV than to cellular phone, as expected, while the map of segment B positions DMB closer to cellular phone than to TV, as expected. Stress values and R-squares are acceptable. The changes after the stimuli show that the cellular-phone-like ad bends the DMB perception toward cellular phone, and that the TV-like ad moves the DMB position toward car TV, the more personalized device; this holds consistently for both segments A and B. Furthermore, the paper applies correspondence analysis to the same data and finds almost the same results. The paper answers the two main research questions: first, perception of a new product is formed mainly from prior experience; second, advertising is effective in changing and reinforcing perception. In addition, we extend perception change to purchase intention: purchase intention is high when the ad reinforces the original perception, and the ad that presents DMB as a TV produces the lowest intention. This paper has limitations and issues to be pursued in the near future. Methodologically, the current approach cannot provide a statistical test of the perceptual change, since classical MDS models such as PROXSCAL and correspondence analysis are not probability models; a new probabilistic MDS model for testing hypotheses about configurations needs to be developed. Next, the advertising messages need to be developed more rigorously from theoretical and managerial perspectives. The experimental procedure could also be improved for more realistic data collection, for example with web-based experiments, real product stimuli, multimedia presentation, or products displayed together in a simulated shop. In addition, demand and social-desirability threats to internal validity could influence the results; to handle these threats, results of the model-intended advertising and other "pseudo" advertising could be compared. Furthermore, one could vary the level of innovativeness to check whether it yields different results (cf. Moon 2006). Finally, if one could create a hypothetical product that is genuinely innovative and new, it would help create a vacant impression state and allow studying how impressions form in a more rigorous way.
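The core of the analysis above is an MDS fit to an averaged dissimilarity matrix. A minimal sketch of that step in Python, with scikit-learn's MDS standing in for SPSS PROXSCAL and a placeholder matrix instead of the survey data:

```python
# Sketch: position 7 devices (incl. DMB) from averaged similarity judgments,
# as SPSS PROXSCAL does in the paper. The matrix below is illustrative, not
# the paper's data: 7-point similarities converted to dissimilarities (8 - s).
import numpy as np
from sklearn.manifold import MDS

labels = ["PDA", "Car-TV", "Cellular", "MP3", "TV", "PSP", "DMB"]
rng = np.random.default_rng(0)
sim = rng.integers(1, 8, size=(7, 7)).astype(float)   # placeholder judgments
sim = (sim + sim.T) / 2                               # symmetrize
dissim = 8 - sim                                      # 7-point scale -> distance-like
np.fill_diagonal(dissim, 0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)                    # 2-D perceptual map
for name, (x, y) in zip(labels, coords):
    print(f"{name:8s} ({x: .2f}, {y: .2f})")
print("stress:", round(mds.stress_, 3))               # fit diagnostic, cf. the paper
```

Plotting `coords` for the pre-ad and post-ad matrices side by side reproduces the positional-shift comparison the paper describes.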


A study on the factors to affect the career success among workers with disabilities (지체장애근로자의 직업성공 요인에 관한 연구)

  • Lee, Dal-Yob
    • 한국사회복지학회:학술대회논문집
    • /
    • 2003.10a
    • /
    • pp.185-216
    • /
    • 2003
  • This study was aimed at investigating important factors influencing career success among regular workers. The researcher scrutinized the degree to which variables and factors affect the career success and occupational turnover rates of the research participants. At the same time, two hypothetical path models established by the researcher were examined using linear multiple regression methods and LISREL. After examining the differences among the factors of career success, a comparison was made between the disabled worker group and the non-disabled worker group. A questionnaire using a 5-point Likert scale was distributed to 374 workers with disabilities and 463 workers without disabilities. For the data analysis, structural equation modeling, factor analysis, correlation analysis, and multiple regression analysis were carried out. The results of this study can be summarized as follows. First, the factor analysis revealed important categories of conceptual themes of career success; the initial conceptual factor model did not accord with the empirical one. A three-factor model revealed personal, family, and organizational factors respectively: the personal factor comprised self-esteem and self-efficacy; the family factor comprised multi-role stress and the number of children; and the organizational factor comprised the capacity for utilizing resources, networking, and the frequency of mentoring. In addition, the 10 sub-areas of career success divided into two important aspects, subjective career success and objective career success. Second, both participant groups seemed to be influenced by their occupational types. However, all predictive variables excluding the wage rate and the average length of work years had a significant impact on job success for the disabled worker group, while all variables excluding the frequency of advice and length of working years had a significant impact for the non-disabled worker group. Third, the turnover rate was significantly influenced by the age and turnover experience of the participants; the number of co-workers was the strongest predictive variable for the worker group with disabilities, but occupation choice was strongest for the worker group without disabilities. For the disabled worker group, the turnover rate was influenced by the type of occupation and the length of working years, while for the non-disabled worker group it was influenced by multi-role stress and the average working years at the time of turnover. Finally, verification of the hypothetical path models showed that the first model was adequate and could predict career success for both participant groups. In the second model, the chi-square and degrees of freedom ($\chi^2 = 64.950$, df = 61, $p = 0.341$), the Adjusted Goodness of Fit Index (AGFI = .964), the Comparative Fit Index (CFI = .997), and the Root Mean Square Residual (RMR = .038) indicated that this model fit best and predicted career success more strongly, since the goodness-of-fit indices were all within the acceptable range. In conclusion, the following research implications can be suggested. First, the occupational type of the participants was one of the most important variables for predicting career success in both groups.
It means that people with disabilities require human development services, including education, to improve themselves in this knowledge-based society. Furthermore, to maintain career success, people with disabilities should be supported with attention to the subjective career success aspects, including wages and promotion opportunities, rather than only the objective career success aspects.
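The reported fit statistics are internally consistent: with df = 61, a chi-square of 64.950 does correspond to p ≈ 0.341. A quick check in Python (scipy assumed available):

```python
# Verify the reported SEM fit: chi-square = 64.950 with df = 61 should give p ~ 0.341.
from scipy.stats import chi2

chi_sq, df = 64.950, 61
p_value = chi2.sf(chi_sq, df)     # survival function = P(X >= chi_sq)
print(f"p = {p_value:.3f}")       # ~0.341; non-significant, so the model is not rejected
```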


A Study on Training Dataset Configuration for Deep Learning Based Image Matching of Multi-sensor VHR Satellite Images (다중센서 고해상도 위성영상의 딥러닝 기반 영상매칭을 위한 학습자료 구성에 관한 연구)

  • Kang, Wonbin;Jung, Minyoung;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1505-1514
    • /
    • 2022
  • Image matching is a crucial preprocessing step for the effective utilization of multi-temporal and multi-sensor very high resolution (VHR) satellite images. Deep learning (DL), which is attracting widespread interest, has proven to be an efficient approach for measuring the similarity between image pairs quickly and accurately by extracting complex and detailed features from satellite images. However, image matching of VHR satellite images remains challenging due to limitations of DL models, whose results depend on the quantity and quality of the training dataset, and due to the difficulty of creating training datasets from VHR satellite images. Therefore, this study examines the feasibility of DL-based methods for matching-pair extraction, the most time-consuming process during image registration. This paper also aims to analyze the factors affecting accuracy that follow from the configuration of the training dataset when it is built from an existing, biased multi-sensor VHR image database for DL-based image matching. For this purpose, the training dataset was composed of correct and incorrect matching pairs, built by assigning true and false labels to image pairs extracted with a grid-based Scale Invariant Feature Transform (SIFT) algorithm over a total of 12 multi-temporal and multi-sensor VHR images. The Siamese convolutional neural network (SCNN), proposed for matching-pair extraction, was trained on the constructed dataset; it measures similarity by passing the two images in parallel through two identical convolutional neural network branches. The results confirm that data acquired from a VHR satellite image database can be used as a DL training dataset, and indicate the potential to improve the efficiency of the matching process through appropriate configuration of multi-sensor images. Given its stable performance, DL-based image matching using multi-sensor VHR satellite images is expected to replace existing manual feature extraction methods and to develop further into an integrated DL-based image registration framework.
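The matching-pair classifier described here is a Siamese network with shared convolutional weights. A minimal PyTorch sketch of that idea; the architecture details, patch size, and layer widths are assumptions for illustration, not the paper's specification:

```python
# Sketch of a Siamese CNN for matching-pair classification, in the spirit of
# the SCNN described above (architecture details are assumed, not the paper's).
import torch
import torch.nn as nn

class SiameseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # One shared convolutional branch; both patches pass through the same weights.
        self.branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(128),
        )
        self.head = nn.Linear(128, 1)   # similarity score from feature difference

    def forward(self, a, b):
        fa, fb = self.branch(a), self.branch(b)
        return torch.sigmoid(self.head(torch.abs(fa - fb)))  # P(correct match)

# Usage on two batches of hypothetical 64x64 single-band patches:
model = SiameseCNN()
a, b = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
print(model(a, b).shape)  # torch.Size([8, 1]); train with nn.BCELoss on 0/1 labels
```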

A Study of Chinese Peaceful Rise and East Asian Regional Cooperation (중국의 평화적 부상과 동아시아 지역협력 연구)

  • Shong, Il-Ho;Lee, Gye-Young
    • International Commerce and Information Review
    • /
    • v.14 no.3
    • /
    • pp.75-96
    • /
    • 2012
  • By 2050, China is expected to take the lead in the global governance of the 21st century. The rise of China offers the Chinese development model to other developing countries. China's 'peaceful rise' strategy contains both positive elements and obstacles. The success of reform and opening up, market liberalization, economic interdependency, economic globalization, the stability of ruling power, the consolidation of one-party rule, and growing soft power all promote the peaceful rise. China's rise as a great power begins with regaining superpower status in East Asia. East Asia is a lebensraum assuring continuing growth for China, and for this lebensraum China shows an interest in the institutionalization of regional economic cooperation. The core values of ASEAN, namely mutual respect, harmonious coexistence, co-prosperity, egalitarianism, and pluralism, conform to China's policy of a harmonious world and peaceful coexistence; through these common values, tension in East Asia will be alleviated. By a regional hegemony strategy based on soft power and economic success, China will try to regain its past glorious position, and attaining the status of a coordinator of world rules will rest on the success of this East Asian strategy. Korea and other neighboring countries will be the main beneficiaries of China's rise strategy, which will have a profound effect on neighboring countries, especially Korea. The scale of the movement of goods, labor, and capital between the two countries will become much larger than at present, and through regional trade agreements, economic interdependency between Korea and China will increase.


Development of the Automatic Fishing System for the Anchovy Scoop nets (I) - The hydraulic winder device for the boom control - (멸치초망 어업의 조업자동화 시스템 개발 (I) -챗대 조작용 유압 권양기 개발-)

  • 박성욱;배봉성;서두옥
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.36 no.3
    • /
    • pp.166-174
    • /
    • 2000
  • Anchovy (Engraulis japonica) scoop nets are used in the coastal waters of southern Korea and Cheju. In Cheju in particular, the scoop-net gear consists of an upper boom, a lower boom, a pressing stick, and a bag net, operated from fishing boats of the 6 to 10 ton class with 8 persons on board. The booms are controlled by a side drum, and the net and pressing stick are hauled by human power alone during operation; this fishery therefore demands much heavy labor and carries considerable risk. Three kinds of hydraulic winding devices controlling the two booms were designed and manufactured to reduce the heavy labor of scoop-net fishing, and sea trials were carried out on commercial fishing boats of the 6 ton class to test their performance. The proper capacities of the hydraulic pump and motor were determined by a 1/5-scale model test of the boom. The results are as follows. 1. The tension in the boom being drawn was strongest, 187.5 kgf, when the boom's end was at a depth of 4 m under water. 2. The hydraulic motor of the best-suited winder had the least leakage per unit time among the three kinds. 3. In the best type of winder, when the pressure difference was fixed at $130\,kgf/cm^2$ for safe fishing, the winding velocity of the boom line was 2 m/s, 0.48 m/s faster than the traditional method, and this winder could handle an anchovy catch of 1.6 t. 4. As a result, the crew was reduced from 8 to 6, and the problems of heavy labor and risk during fishing operations were solved by using this winder.
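As a rough plausibility check on the winder sizing (my arithmetic, not the paper's): hauling the reported peak boom tension of 187.5 kgf at the 2 m/s winding speed implies a mechanical power of roughly 3.7 kW before hydraulic losses.

```python
# Back-of-envelope check (not from the paper): hydraulic power needed to haul
# the boom line at peak load. P = F * v, with kgf converted to newtons.
G = 9.80665                    # m/s^2, standard gravity
tension_kgf = 187.5            # peak boom tension reported in the trials
v = 2.0                        # winding velocity of the boom line, m/s

power_w = tension_kgf * G * v  # F[N] * v[m/s]
print(f"peak mechanical power ~ {power_w/1000:.1f} kW")  # ~3.7 kW before losses
```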


Time-lapse crosswell seismic tomography for monitoring injected $CO_2$ in an onshore aquifer, Nagaoka, Japan (일본 Nagaoka의 육상 대수층에 주입된 $CO_2$의 관찰을 위한 시간차 시추공간 탄성파 토모그래피)

  • Saito, Hideki;Nobuoka, Dai;Azuma, Hiroyuki;Xue, Ziqiu;Tanase, Daiji
    • Geophysics and Geophysical Exploration
    • /
    • v.9 no.1
    • /
    • pp.30-36
    • /
    • 2006
  • Japan's first pilot-scale $CO_2$ sequestration experiment has been conducted in Nagaoka, where 10,400 t of $CO_2$ have been injected into an onshore aquifer at a depth of about 1100 m. Among the various measurements conducted at the site to monitor the injected $CO_2$, we conducted time-lapse crosswell seismic tomography between two observation wells to determine the distribution of $CO_2$ in the aquifer from the change in P-wave velocities. This paper reports the results of the crosswell seismic tomography conducted at the site. The measurements were carried out three times: once before the injection as a baseline survey, and twice during the injection as monitoring surveys. The velocity tomograms from the monitoring surveys were compared with the baseline tomogram, and velocity difference tomograms were generated. These showed that velocity had decreased in a part of the aquifer around the injection well, where the injected $CO_2$ was supposed to be distributed. We also found that the area of decreased velocity was expanding in the formation's up-dip direction as increasing amounts of $CO_2$ were injected. The maximum velocity reductions observed were 3.0% after 3200 t of $CO_2$ had been injected, and 3.5% after injection of 6200 t. Although seismic tomography could map the area of velocity decrease due to $CO_2$ injection, we observed some contradictions with the results of time-lapse sonic logging and with the geological condition of the cap rock. To investigate these contradictions, we conducted numerical experiments simulating the test site. We found that part of the velocity distribution displayed in the tomograms was affected by artefacts, or ghosts, caused by the source-receiver geometry of the crosswell tomography at this particular site. The maximum velocity decrease obtained by tomography (3.5%) was much smaller than that observed by sonic logging (more than 20%): the numerical experiments showed that only a 5.5% velocity reduction might be observed even when the model was given a 20% velocity-reduction zone. Judging from this result, the actual velocity reduction may well exceed the 3.5% we obtained from the field data reconstruction. Further studies are needed to obtain more accurate velocity values comparable to those obtained by sonic logging.
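The time-lapse step itself is simple once the tomograms are inverted: difference the monitor and baseline velocity models on a common grid. A minimal sketch, with placeholder arrays standing in for the Nagaoka tomograms:

```python
# Sketch of the time-lapse step: percent P-wave velocity change between a
# baseline tomogram and a monitor tomogram on the same grid (arrays are
# placeholders, not the Nagaoka data).
import numpy as np

baseline = np.full((60, 40), 2500.0)          # m/s, pre-injection velocity model
monitor = baseline.copy()
monitor[30:40, 15:25] *= 0.965                # assume a 3.5% slowdown near the well

dv = 100.0 * (monitor - baseline) / baseline  # velocity difference tomogram, %
print(f"max velocity reduction: {dv.min():.1f}%")   # -3.5%, cf. the field result
```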

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of the different nature of their capital structure and debt-to-equity ratios, they are harder to forecast than bankruptcies of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. The economic cycle strongly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates, and high leverage coupled with rising bankruptcy rates can place a heavy burden on the banks that lend to them. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy forecasting models based on corporate financial data have been studied for many years in various ways, but their subjects were companies in general, so they may not be appropriate for accurately forecasting bankruptcies of construction companies, which carry disproportionately large liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with such a distinctive capital structure, criteria used to judge the financial risk of companies in general are difficult to apply effectively to construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model: it forecasts the likelihood of bankruptcy with a simple formula, classifies the result into three categories, and rates the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood; for the "moderate" category the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made their risk difficult to forecast. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted this technology: pattern recognition, a representative application area of machine learning, is applied by analyzing patterns in a company's financial information and judging whether the pattern belongs to the bankruptcy-risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), along with many hybrid studies combining them. Existing studies, whether using the traditional Z-score technique or machine learning, focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies, analyzed by company size. We classified construction companies into three groups, large, medium, and small, based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results show that AdaBoost has more predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
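As a concrete illustration of the modeling setup, here is a hedged scikit-learn sketch: AdaBoost classifiers fitted separately per company-size group. The features, data, and size split are placeholders, not the paper's dataset:

```python
# Sketch of the paper's approach with scikit-learn: AdaBoost on financial
# ratios, evaluated separately by company size (all data synthetic).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 8))            # e.g. debt ratio, liquidity, margins...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=600) > 0.8).astype(int)
size_group = rng.choice(["large", "medium", "small"], size=600)  # by capital

for grp in ["large", "medium", "small"]:
    m = size_group == grp
    Xtr, Xte, ytr, yte = train_test_split(X[m], y[m], random_state=0)
    clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    print(grp, f"accuracy = {accuracy_score(yte, clf.predict(Xte)):.3f}")
```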

An Experimental Study on Establishing Criteria of Gripping Work in Construction Site (건설 현장 악력 작업안전 기준 설정에 관한 실험적 연구)

  • 손기상;이인홍;최만진;안병준
    • Journal of the Korean Society of Safety
    • /
    • v.10 no.3
    • /
    • pp.81-95
    • /
    • 1995
  • Safety assurance on construction sites should now be accomplished by the site's own organization rather than by control of codes or government. Safety can be considerably improved by lectures or education based on existing theories and literature, but fundamental safety assurance cannot be achieved without developing safety devices and equipment, or taking fundamental measures based on analysis of workers' behavior. Various worker behaviors appear on construction sites, but only hammer-using work, such as that of form, re-bar, and stone workers, which is directly related to grip strength, is investigated and measured here. Such work resembles the power grip, the seventh of the seven categorized hand-grip types (Ammermin 1956; Jones; Kobrick 1958). Measurements of grip strength are commonly taken in anthropometric surveys. They are easy to administer, but it is rather dubious whether they yield data of interest to the engineer: studies of how tools are grasped and squeezed show very little overall correlation between grip strength and other measures of bodily strength (Laubach, Kromer, and Thordsen 1972). Hammer-using work as actually performed on construction sites, however, is heavily influenced by grip strength; according to the work measurement survey, 77% of form workers use hammers, which ties their work to grip strength. This study notes in particular that wearing safety gloves on site is required for worker safety, yet the roughly 20% difference between grip strength with and without safety gloves is commonly neglected on site (Fig. 1), and safe operation allowing for this 20% difference is not considered on construction sites. Factors of age, kind of work, working time, and the presence or absence of safety gloves were investigated and collected at the sites. Based on questionnaire results (also reported in the research), tests were performed not at every working hour but at 14:00, when most workers report feeling most tired, and were compared for the main kinds of work: form and re-bar work. Tests were performed on both the left and right hands of workers simultaneously on site using a hand dynamometer (Model 78010, Lafayette Instrument Co., Indiana, U.S.A.), reading grip strength on the gauge while the worker pulled, and then interviewing directly about age, work, experience, etc. The tests were performed from 15 March to 26 May 1995, subject to site conditions. Although factors such as ambient temperature on the test date, working conditions, individual habits, and each worker's condition on the previous day bear on the study, they are treated as constants here. Samples were: formwork 53, re-bar 62, electrician 5, plumber 4, welding 1 from D Construction Co., Ltd.; formwork 12, re-bar 5, electrician 2 from S Construction Co., Ltd.; and formwork 78, re-bar 18, plumber 31, electrician 13, labor 48, plasterer 15, concrete placer 6, waterproofing worker 3, masonry 5 from B Construction Co., Ltd.
As mentioned previously, the main focus of this study is form and re-bar work, because grip strength applies directly to these two kinds of work, even though a total of 405 samples were taken. Accident frequency is thought to relate mainly to the two work postures, "looking up" and "looking down", but this factor is not clarified in this study because it would require much more work. Tests were done at the largest possible horizontally extended work sites within one hour, in order to prevent or reduce errors and discrepancies from time lag during testing. Additionally, the statistical package SPSS PC+ was used for the analysis.
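The gloves-versus-bare-hands comparison at the heart of the study is a paired design. A minimal sketch of that analysis in Python, with simulated readings standing in for the dynamometer data (the ~20% glove decrement is taken from the text, everything else is assumed):

```python
# Sketch: paired comparison of grip strength with vs. without safety gloves.
# Simulated dynamometer readings; the ~20% reduction mirrors the figure cited
# in the abstract, not the study's raw data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bare = rng.normal(45, 6, size=60)                  # kgf, bare-hand grip strength
gloved = bare * 0.80 + rng.normal(0, 2, size=60)   # ~20% lower with gloves

t, p = stats.ttest_rel(bare, gloved)               # paired t-test, same workers
print(f"mean drop = {np.mean(bare - gloved):.1f} kgf, t = {t:.2f}, p = {p:.2g}")
```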


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models; among its important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities; beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense the predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs consisting of nodes and edges; partial derivatives can be obtained on each edge, and with them the software can compute the derivative of any node with respect to any variable via the chain rule of calculus. First, convenience of coding ranks, from easiest to hardest, CNTK, Tensorflow, and Theano. This criterion is based simply on code length; the learning curve and ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier; with Tensorflow, weight variables and biases must be defined explicitly. CNTK and Tensorflow are easier to implement with because they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility, and with low-level coding as in Theano one can implement and test any new deep learning model or search method one can think of. As for execution speed, we found no meaningful difference: in our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept identical; the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU, but we concluded that the speed difference was within the range of variation caused by the different hardware setup. In this study we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes; among the important attributes are the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN.
For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for those learning deep learning, the availability of enough examples and references matters as well.
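The automatic differentiation shared by all three frameworks is reverse-mode differentiation over a computational graph. A deliberately minimal toy version in plain Python (scalars instead of tensors, no framework) illustrates the chain-rule mechanism the abstract describes:

```python
# Toy reverse-mode autodiff over a scalar computational graph, illustrating
# the chain-rule mechanism described above. Real frameworks do this over
# tensors and handle shared subgraphs more efficiently.
class Node:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, list(parents), 0.0

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:   # chain rule along each graph edge
            parent.backward(seed * local)

def mul(a, b):  # d(ab)/da = b, d(ab)/db = a
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def add(a, b):  # d(a+b)/da = d(a+b)/db = 1
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

x, w, b = Node(2.0), Node(3.0), Node(1.0)
y = add(mul(w, x), b)       # y = w*x + b = 7
y.backward()
print(y.value, w.grad, x.grad, b.grad)  # 7.0 2.0 3.0 1.0 (dy/dw=x, dy/dx=w, dy/db=1)
```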

Computer Game Addiction and Physical Health of Korea Children : Mediating Effects of Anxiety (아동의 컴퓨터 게임 중독과 신체 증상: 불안의 매개효과)

  • Kwon, Sun-Jung;Kim, Kyo-Heon;Lee, Hong-Seock
    • Survey Research
    • /
    • v.6 no.2
    • /
    • pp.33-50
    • /
    • 2005
  • The purpose of this study was to investigate whether Korean children's addiction to computer games affects physical health (headache, insomnia, indigestion, cardiovascular symptoms). We considered both the direct effects of the addiction and the indirect effects mediated by a negative emotion (anxiety). For this study, we collected data from 800 students of grades … and 6 in Daejeon, Korea. Among them, we analyzed the data of the 572 students [408 boys (71.3%) and 164 girls (28.7%)] who had played computer games for a long period of time (two years or more). The reliability of the scales used in this study was at a proper level, $.64 \sim .91$, and structural equation modeling was applied to clarify the causal relationships among the variables. The fit of every model was acceptable (all GFI > .931, all CFI > .939, all NNFI > .929, all RMSEA < .046). Computer game addiction not only directly affected physical symptoms (headache $\beta = .211$, insomnia $\beta = .289$, indigestion $\beta = .214$, cardiovascular $\beta = .349$; all $p$s < .001), but also affected anxiety ($\beta = .458$, $p < .001$). In addition, the effect of anxiety on physical symptoms was significant (headache $\beta = .419$, insomnia $\beta = .375$, indigestion $\beta = .498$, cardiovascular $\beta = .328$). The indirect effects of computer game addiction mediated by anxiety were headache $\beta = .192$, insomnia $\beta = .172$, indigestion $\beta = .228$, and cardiovascular $\beta = .151$. Computer game addiction explained 21% of anxiety, and the mediation model showed that it explained 30% of headache, 32% of insomnia, 39% of indigestion, and 34% of cardiovascular symptoms. In sum, computer game addiction has a negative effect on physical health both directly and indirectly; cardiovascular symptoms were influenced most strongly, followed by insomnia, indigestion, and headache. The implications gleaned from this study were discussed, with considerations for future study and practical application.
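The reported indirect effects are consistent with the usual product-of-paths computation for mediation, e.g. $.458 \times .419 \approx .192$ for headache. A quick check:

```python
# Verify the indirect (mediated) effects: addiction -> anxiety -> symptom.
# Indirect effect = beta(addiction->anxiety) * beta(anxiety->symptom).
a = 0.458  # addiction -> anxiety
paths = {"headache": 0.419, "insomnia": 0.375,
         "indigestion": 0.498, "cardiovascular": 0.328}

for symptom, b in paths.items():
    print(f"{symptom:14s} indirect = {a * b:.3f}")
# -> .192, .172, .228, .150: matching the reported betas to rounding
#    (cardiovascular prints .150 vs. the reported .151, a rounding artifact
#    of multiplying already-rounded path coefficients).
```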
