• Title/Summary/Keyword: Optimization analysis


Recent Progress in Air Conditioning and Refrigeration Research: A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2007 (설비공학 분야의 최근 연구 동향 : 2007년 학회지 논문에 대한 종합적 고찰)

  • Han, Hwa-Taik;Shin, Dong-Sin;Choi, Chang-Ho;Lee, Dae-Young;Kim, Seo-Young;Kwon, Yong-Il
    • Korean Journal of Air-Conditioning and Refrigeration Engineering / v.20 no.12 / pp.844-861 / 2008
  • The papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2007 have been reviewed, with a focus on the current status of research in heating, cooling, ventilation, sanitation, and building environments. The conclusions are as follows. (1) Research trends in fluid engineering were surveyed in the groups of general fluid flow, fluid machinery, piping, etc. New research topics include micro/nano fluids, micropumps, and fuel cells. Conventional CFD remained popular and widely used in research and development. Studies on fans and pumps were performed in the field of fluid machinery, and flow characteristics and fin-shape optimization were studied in the field of piping systems. (2) Research on heat transfer was reviewed in the areas of heat transfer characteristics, heat exchangers, and desiccant cooling systems. Work on heat transfer characteristics includes thermal transport in pulse tubes, high-temperature superconductors, ground heat exchangers, fuel cell stacks, and ice slurry systems. For heat exchangers, research on fin-tube heat exchangers, plate heat exchangers, condensers, and gas coolers was actively carried out, and work on heat-transfer-augmenting tubes was also reported. For desiccant cooling systems, studies on the design and operating conditions of desiccant rotors, as well as on performance indices, are noticeable. (3) In the field of refrigeration, many papers addressed air-conditioning systems using CO2 as a refrigerant, covering two-stage compression, oil selection, and the appropriate oil charge. Alternative refrigerants were also studied steadily: hydrocarbons, DME, and their mixtures were considered, and various heat transfer correlations were proposed. (4) Research in the field of building facilities was reviewed in the groups of heat and cold sources, air conditioning and air cleaning, ventilation and fire research including tunnel ventilation, flow control of piping systems, and acoustic research on drain systems. The main focus was on promoting efficient and effective use of energy, which saves energy and reduces environmental pollution and operating cost. (5) In the architectural environment field, studies mostly analyzed the indoor environment of various spaces such as cars, old tombs, and machine rooms. In addition, topics from various fields, such as noise evaluation, thermal environment, indoor air quality, and the development of energy analysis programs, were investigated through surveys, simulations, and field experiments.

Integrated Rotary Genetic Analysis Microsystem for Influenza A Virus Detection

  • Jung, Jae Hwan;Park, Byung Hyun;Choi, Seok Jin;Seo, Tae Seok
    • Proceedings of the Korean Vacuum Society Conference / 2013.08a / pp.88-89 / 2013
  • A variety of influenza A viruses from animal hosts are continuously prevalent throughout the world, causing human epidemics that result in millions of infections and enormous industrial and economic damage. Early diagnosis of such pathogens is therefore of paramount importance for biomedical examination and public healthcare screening. To address this issue, we propose a fully integrated rotary genetic analysis system, called the Rotary Genetic Analyzer, for rapid on-site detection of influenza A viruses. The Rotary Genetic Analyzer is made up of four parts: a disposable microchip, a servo motor for precise and high-rate spinning of the chip, thermal blocks for temperature control, and a miniaturized optical fluorescence detector, as shown in Fig. 1. A thermal block made from duralumin is integrated with a film heater at the bottom and a resistance temperature detector (RTD) in the middle. For efficient RT-PCR, three thermal blocks are placed on the rotary stage, and the temperature of each block corresponds to one thermal cycling step, namely $95^{\circ}C$ (denaturation), $58^{\circ}C$ (annealing), and $72^{\circ}C$ (extension). Rotary RT-PCR was performed to amplify the target gene, which was monitored by an optical fluorescence detector above the extension block. The disposable microdevice (10 cm diameter) consists of a solid-phase-extraction-based sample pretreatment unit, a bead chamber, and a 4 ${\mu}L$ PCR chamber, as shown in Fig. 2. The microchip is fabricated from a patterned polycarbonate (PC) sheet of 1 mm thickness and a PC film of 130 ${\mu}m$ thickness, and the layers are thermally bonded at $138^{\circ}C$ using acetone vapour. Silica-treated glass microbeads of 150~212 ${\mu}m$ diameter are introduced into the sample pretreatment chamber and held in place by a weir structure to form the solid-phase extraction system. Fig. 3 shows strobed images of the sequential loading of three samples. Three samples were loaded into the reservoirs simultaneously (Fig. 3A), then the influenza A H3N2 viral RNA sample was loaded at 5000 RPM for 10 sec (Fig. 3B). Washing buffer followed at 5000 RPM for 5 min (Fig. 3C), and the angular frequency was decreased to 100 RPM for siphon priming of the PCR cocktail into the channel, as shown in Fig. 3D. Finally, the PCR cocktail was loaded into the bead chamber at 2000 RPM for 10 sec, and the speed was then increased to 5000 RPM for 1 min to recover as much of the PCR cocktail containing the RNA template as possible (Fig. 3E). In this system, the waste from the RNA sample and washing buffer was transported to the waste chamber, which was completely filled through precise optimization, and the PCR cocktail could then be transported to the PCR chamber. Fig. 3F shows the final image of the sample pretreatment: the PCR cocktail containing the RNA template is successfully isolated from the waste. To detect the influenza A H3N2 virus, the RNA captured on the proposed microdevice was amplified with the PCR cocktail in the PCR chamber. Fluorescence images at cycles 0 and 40 are shown in Fig. 4A; the fluorescence signal at cycle 40 increased drastically, confirming detection of the influenza A H3N2 virus. Real-time profiles were successfully obtained using the optical fluorescence detector, as shown in Fig. 4B. Rotary PCR and off-chip PCR were compared with the same amount of influenza A H3N2 virus; the Ct value of the Rotary PCR was smaller than that of the off-chip PCR, with no contamination observed.
The whole process of the sample pretreatment and RT-PCR could be accomplished in 30 min on the fully integrated Rotary Genetic Analyzer system. We have demonstrated a fully integrated and portable Rotary Genetic Analyzer for detection of the gene expression of influenza A virus, which has 'Sample-in-answer-out' capability including sample pretreatment, rotary amplification, and optical detection. Target gene amplification was real-time monitored using the integrated Rotary Genetic Analyzer system.


Optimization Process Models of Gas Combined Cycle CHP Using Renewable Energy Hybrid System in Industrial Complex (산업단지 내 CHP Hybrid System 최적화 모델에 관한 연구)

  • Oh, Kwang Min;Kim, Lae Hyun
    • Journal of Energy Engineering / v.28 no.3 / pp.65-79 / 2019
  • This study attempted to estimate the optimal facility capacity when combining renewable energy sources that can be connected with gas CHP in industrial complexes. In particular, we reviewed industrial complexes subject to the energy use plan from 2013 to 2016 and selected the Sejong industrial complex for the study; although it was excluded from the regional designation, it has an annual fuel usage of 38 thousand TOE and a high heat density of $92.6 Gcal/km^2{\cdot}h$. We then analyzed the optimal operation model of a CHP hybrid system linking fuel cells and photovoltaic generation using HOMER Pro, an economic analysis program for renewable energy hybrid systems. In addition, to improve the reliability of the analysis, not only the total heat demand but also the heat demand pattern of the dominant sector for thermal energy, the main energy supplied by CHP, was analyzed, and the economic benefits were compared. As a result, the total indirect heat demand of the Sejong industrial complex under construction was 378,282 Gcal per year, of which the paper industry accounted for 77.7% (293,754 Gcal per year). For the indirect heat demand of the entire industrial complex, a single CHP has an optimal capacity of 30,000 kW; in this case the CHP supplies 275,707 Gcal (72.8%) of heat production, while the peak-load boiler (PLB) supplies 103,240 Gcal (27.2%). For the combination of CHP, fuel cell, and photovoltaics, the optimal capacities are 30,000 kW, 5,000 kW, and 1,980 kW, respectively; here the CHP supplies 275,940 Gcal (72.8%), the fuel cell 12,390 Gcal (3.3%), and the PLB 90,620 Gcal (23.9%). The CHP capacity was not reduced because reducing it proved uneconomical: the resulting shortfall in heat production would require excessive PLB operation. On the other hand, for the indirect heat demand of the paper industry alone, the dominant sector, the optimal capacities of the CHP, fuel cell, and photovoltaic combination are 25,000 kW, 5,000 kW, and 2,000 kW, with heat production of 225,053 Gcal (76.5%) from the CHP, 11,215 Gcal (3.8%) from the fuel cell, and 58,012 Gcal (19.7%) from the PLB. Economic analysis under the current electricity and gas markets confirms that the investment cannot be recovered; however, the CHP hybrid system combining CHP, fuel cells, and solar power can improve the operating balance by about KRW 9.3 billion annually compared with a single CHP system. The heat-production shares quoted above are re-derived in the sketch below.
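
A minimal sketch, in Python, that re-derives the heat-production shares quoted in the abstract from the reported Gcal figures. The numbers come from the abstract itself; the helper function and variable names are illustrative and not part of the original study.

```python
# Re-deriving the heat-production shares reported above. Gcal figures are
# taken from the abstract; the helper name `shares` is illustrative only.

def shares(production_gcal: dict) -> dict:
    """Return each source's share of total heat production in percent."""
    total = sum(production_gcal.values())
    return {src: round(100.0 * q / total, 1) for src, q in production_gcal.items()}

# Case 1: single 30,000 kW CHP plus peak-load boiler (PLB)
single_chp = {"CHP": 275_707, "PLB": 103_240}

# Case 2: CHP (30,000 kW) + fuel cell (5,000 kW) + PV (1,980 kW); PV supplies
# electricity, so only CHP, fuel cell, and PLB appear in the heat balance.
hybrid = {"CHP": 275_940, "Fuel cell": 12_390, "PLB": 90_620}

print(shares(single_chp))  # ~ {'CHP': 72.8, 'PLB': 27.2}
print(shares(hybrid))      # ~ {'CHP': 72.8, 'Fuel cell': 3.3, 'PLB': 23.9}
```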

Calculation of Soil Moisture and Evapotranspiration of KLDAS applying Ground-Observed Meteorological Data (지상관측 기상자료를 적용한 KLDAS(Korea Land Data Assimilation System)의 토양수분·증발산량 산출)

  • Park, Gwangha;Kye, Changwoo;Lee, Kyungtae;Yu, Wansik;Hwang, Eui-ho;Kang, Dohyuk
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1611-1623 / 2021
  • This study evaluates soil moisture and evapotranspiration estimates produced by the Korea Land Data Assimilation System (KLDAS) under the Korea Land Information System (KLIS). Spin-up was repeated 8 times over the year 2018. In addition, low-resolution and high-resolution meteorological forcing data were generated from observations by the Korea Meteorological Administration (KMA), Rural Development Administration (RDA), Korea Rural Community Corporation (KRC), Korea Hydro & Nuclear Power Co., Ltd. (KHNP), Korea Water Resources Corporation (K-water), and Ministry of Environment (ME), and applied to KLDAS. To confirm the degree of accuracy improvement of the Korea low-spatial-resolution forcing (hereafter K-Low; 0.125°) and the Korea high-spatial-resolution forcing (hereafter K-High; 0.01°), soil moisture and evapotranspiration driven by Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2) and by ASOS-Spatial (ASOS-S), used in a previous study, were evaluated together. As a result, optimizing the initial boundary condition for soil moisture requires two spin-ups (58 points), three spin-ups (6 points), or six spin-ups (3 points); for evapotranspiration, one spin-up (58 points) or two spin-ups (58 points) are required. For soil moisture with MERRA-2, ASOS-S, K-Low, and K-High applied, the mean R2 values were 0.615, 0.601, 0.594, and 0.664, respectively, and for evapotranspiration they were 0.531, 0.495, 0.656, and 0.677, respectively, indicating that K-High was the most accurate. These results show that the accuracy of KLDAS can be improved by securing a large number of ground observations and generating high-resolution gridded meteorological data. However, if the meteorological conditions at each point are not sufficiently taken into account when converting the point data into a grid, the accuracy is rather lowered. In further work, higher-quality data are expected by generating and applying gridded meteorological data with tuned parameter settings of IDW or other interpolation techniques, as sketched below.
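
A minimal sketch, assuming hypothetical station data, of inverse-distance-weighting (IDW) interpolation of point meteorological observations onto a regular grid, one candidate for the gridding step discussed above. The power parameter, station coordinates, and grid spacing are illustrative assumptions, not values from the paper.

```python
# IDW interpolation of point observations onto a regular grid.
import numpy as np

def idw_grid(st_xy, st_val, grid_x, grid_y, power=2.0, eps=1e-12):
    """Interpolate station values onto a grid with IDW weights 1/d**power."""
    gx, gy = np.meshgrid(grid_x, grid_y)                  # (ny, nx)
    d = np.sqrt((gx[..., None] - st_xy[:, 0]) ** 2 +
                (gy[..., None] - st_xy[:, 1]) ** 2)       # (ny, nx, n_stations)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w * st_val).sum(axis=-1) / w.sum(axis=-1)

# Toy example: three stations (lon, lat) with air-temperature readings.
stations = np.array([[127.0, 36.5], [127.3, 36.6], [127.1, 36.8]])
temps = np.array([21.3, 20.1, 19.4])

lon = np.arange(126.9, 127.41, 0.01)   # 0.01-degree spacing, K-High style
lat = np.arange(36.4, 36.91, 0.01)
field = idw_grid(stations, temps, lon, lat, power=2.0)
print(field.shape)                      # (len(lat), len(lon))
```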

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research since the early 1970s, and it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and in the same period empirical studies on recidivism factors also began in Korea. Although most recidivism prediction studies have so far focused on the factors of recidivism or on prediction accuracy, it is important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not reoffend as a recidivist is lower than the cost of misclassifying a person who will reoffend as a non-recidivist, because the former only adds monitoring costs, while the latter incurs broader social and economic costs. Therefore, this paper proposes an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in data mining, is applied, and its results are compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold is optimized to minimize the total misclassification cost, defined as the weighted average of the FNE (False Negative Error) and FPE (False Positive Error), as sketched below. To verify the usefulness of the model, it was applied to a real recidivism prediction dataset. As a result, the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
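
A minimal sketch, assuming hypothetical cost weights and a synthetic stand-in dataset, of the cost-sensitive threshold step described above: an XGBoost classifier is trained, and the decision threshold is then chosen to minimize a weighted false-negative/false-positive cost. This illustrates the technique; it is not the authors' code or data.

```python
# Threshold selection minimising an asymmetric misclassification cost
# (false negatives weighted more heavily than false positives).
import numpy as np
from sklearn.datasets import make_classification       # synthetic stand-in data
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

C_FN, C_FP = 5.0, 1.0    # assumed asymmetric costs (missed recidivist >> extra monitoring)

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_va)[:, 1]                 # P(recidivism)

def total_cost(threshold):
    pred = (proba >= threshold).astype(int)
    fn = np.sum((y_va == 1) & (pred == 0))              # missed recidivists
    fp = np.sum((y_va == 0) & (pred == 1))              # needless monitoring
    return C_FN * fn + C_FP * fp

thresholds = np.linspace(0.05, 0.95, 91)
best = min(thresholds, key=total_cost)
print(f"cost-minimising threshold: {best:.2f}, cost: {total_cost(best):.0f}")
```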

Are you a Machine or Human?: The Effects of Human-likeness on Consumer Anthropomorphism Depending on Construal Level (Are you a Machine or Human?: 소셜 로봇의 인간 유사성과 소비자 해석수준이 의인화에 미치는 영향)

  • Lee, Junsik;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.129-149 / 2021
  • Recently, interest in social robots that can socially interact with humans has been increasing. Thanks to the development of ICT, it has become easier for social robots to provide personalized services and emotional connection to individuals, and their role is drawing attention as a means of addressing modern social problems and the resulting decline in the quality of individual lives. Along with this interest, the spread of social robots is also increasing significantly. Many companies are introducing robot products to target various markets, but so far there is no clear trend leading the market. Accordingly, there are more and more attempts to differentiate robots through the design of social robots. In particular, anthropomorphism has been an important topic in social robot design, and many approaches have attempted to anthropomorphize social robots to produce positive effects. However, there is a lack of research that systematically describes the mechanism by which anthropomorphism of social robots is formed. Most existing studies have focused on verifying the positive effects of the anthropomorphism of social robots on consumers. In addition, the formation of anthropomorphism may vary depending on an individual's motivation or temperament, but few studies have examined this. A vague understanding of anthropomorphism makes it difficult to derive optimal design points for shaping the anthropomorphism of social robots. The purpose of this study is to verify the mechanism by which the anthropomorphism of social robots is formed. The study examined the effects of the human-likeness of social robots (within-subjects) and the construal level of consumers (between-subjects) on the formation of anthropomorphism through an experiment with a 3×2 mixed design. Research hypotheses on the mechanism by which anthropomorphism is formed were presented, and the hypotheses were tested by analyzing data from a sample of 206 people. The first hypothesis is that the higher the human-likeness of the robot, the higher the level of anthropomorphism for the robot; Hypothesis 1 was supported by a one-way repeated measures ANOVA and a post hoc test. The second hypothesis is that the effect of human-likeness on the level of anthropomorphism differs depending on the construal level of consumers. First, this study predicts that the difference in the level of anthropomorphism as human-likeness increases will be greater under the high-construal condition than under the low-construal condition. Second, if the robot has no human-likeness, there will be no difference in the level of anthropomorphism according to construal level. Third, if the robot has low human-likeness, the low-construal-level condition will make the robot more anthropomorphic than the high-construal-level condition. Finally, if the robot has high human-likeness, the high-construal-level condition will make the robot more anthropomorphic than the low-construal-level condition. We performed a two-way repeated measures ANOVA to test these hypotheses and confirmed that the interaction effect of human-likeness and construal level was significant; further analysis of the interaction effect also supported our hypotheses.
The analysis shows that the human-likeness of the robot increases the level of anthropomorphism of social robots, and the effect of human-likeness on anthropomorphism varies depending on the construal level of consumers. This study has implications in that it explains the mechanism by which anthropomorphism is formed by considering the human-likeness, which is the design attribute of social robots, and the construal level of consumers, which is the way of thinking of individuals. We expect to use the findings of this study as the basis for design optimization for the formation of anthropomorphism in social robots.
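
A minimal sketch, using simulated data, of how a 3×2 mixed-design analysis like the one described above (human-likeness as a three-level within-subjects factor, construal level as a two-level between-subjects factor) could be run in Python. The pingouin package is one possible library choice and is not necessarily the tool used by the authors; all numbers are toy values.

```python
# Mixed-design ANOVA: within-subjects "likeness" x between-subjects "construal".
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for construal in ["low", "high"]:
    for pid in range(30):                                 # 30 toy participants per group
        for i, likeness in enumerate(["none", "low", "high"]):
            # toy interaction: human-likeness matters more under high construal
            base = 2.0 + i * (0.8 if construal == "high" else 0.4)
            rows.append({"id": f"{construal}_{pid}",
                         "construal": construal,
                         "likeness": likeness,
                         "anthro": base + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="anthro", within="likeness",
                     subject="id", between="construal")
print(aov[["Source", "F", "p-unc"]])   # main effects and interaction term
```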

Development of System for Real-Time Object Recognition and Matching using Deep Learning at Simulated Lunar Surface Environment (딥러닝 기반 달 표면 모사 환경 실시간 객체 인식 및 매칭 시스템 개발)

  • Jong-Ho Na;Jun-Ho Gong;Su-Deuk Lee;Hyu-Soung Shin
    • Tunnel and Underground Space / v.33 no.4 / pp.281-298 / 2023
  • Continuous research efforts are being devoted to unmanned mobile platforms for lunar exploration, and there is an ongoing demand for real-time information processing to accurately determine the position and map areas of interest on the lunar surface. To apply deep learning processing and analysis techniques to practical rovers, research on software integration and optimization is imperative. In this study, a foundational investigation was conducted on real-time analysis of images of a virtual lunar base construction site, aimed at automatically quantifying the spatial information of key objects. The study involved transitioning from an existing region-based object recognition algorithm to a bounding-box-based algorithm, thereby enhancing object recognition accuracy and inference speed. To facilitate large-scale, data-driven object matching training, the Batch Hard Triplet Mining technique was introduced (see the sketch below), and both the training and inference processes were optimized. Furthermore, an improved software system for object recognition and identical-object matching was integrated, accompanied by the development of visualization software for automatic matching of identical objects within input images. Using simulated satellite-view video data for training and video captured of moving objects for inference, training and inference for identical-object matching were successfully executed. The outcomes of this research suggest the feasibility of building 3D spatial information from continuously captured video from mobile platforms and using it to position objects within regions of interest. These findings are expected to contribute to an integrated, automated on-site system for video-based construction monitoring and control of significant target objects at future lunar base construction sites.
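
A minimal sketch, in PyTorch, of the Batch Hard Triplet Mining idea mentioned above: for each anchor embedding, the farthest same-identity sample and the closest different-identity sample in the batch are selected and fed into a margin-based triplet loss. The margin, embedding size, and toy batch are assumptions for illustration, not the authors' configuration.

```python
# Batch-hard triplet mining with a margin-based triplet loss.
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.3) -> torch.Tensor:
    dist = torch.cdist(embeddings, embeddings, p=2)             # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    pos_mask = same & ~eye                                      # same object, not itself
    neg_mask = ~same                                            # different object

    hardest_pos = (dist * pos_mask.float()).max(dim=1).values   # farthest positive
    d_neg = dist.masked_fill(~neg_mask, float("inf"))
    hardest_neg = d_neg.min(dim=1).values                       # closest negative

    return F.relu(hardest_pos - hardest_neg + margin).mean()

# Toy batch: 8 embeddings covering 4 object identities (2 samples each).
emb = F.normalize(torch.randn(8, 128), dim=1)
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(batch_hard_triplet_loss(emb, ids).item())
```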

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rely on strict assumptions, including linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and Support Vector Machines (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundary, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods or the sequential minimal optimization algorithm) can be used to reduce computation time for multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class; such data sets often yield a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted.
Thus Boosting attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor; in this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to address the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can account for geometric-mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance (a sketch of this evaluation protocol is given below). For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is used in turn as the test set while the classifier trains on the other nine sets; the cross-validated folds are thus tested independently for each algorithm. Through these steps, results were obtained for the classifiers on each of the 30 experiments. In terms of arithmetic-mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%); MGM-Boost (28.12%) also outperforms AdaBoost (24.65%) and SVM (15.42%) in terms of geometric-mean-based prediction accuracy. A t-test is used to examine whether the performance of the classifiers over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
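
A minimal sketch, using synthetic imbalanced data and an SVM baseline as a stand-in for the classifiers above, of the evaluation protocol described in the abstract: 10-fold cross-validation repeated three times with different seeds, scored with both arithmetic accuracy and a geometric-mean-based accuracy (the geometric mean of per-class recalls). It illustrates the metric and protocol only; it is not MGM-Boost itself.

```python
# Repeated 10-fold CV with arithmetic and geometric-mean-based accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, recall_score
from sklearn.svm import SVC

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls (0 if any class is fully missed)."""
    recalls = recall_score(y_true, y_pred, average=None, zero_division=0)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# Synthetic imbalanced 4-class data as a stand-in for the bond-rating set.
X, y = make_classification(n_samples=600, n_features=15, n_informative=8,
                           n_classes=4, n_clusters_per_class=1,
                           weights=[0.5, 0.3, 0.15, 0.05], random_state=0)

acc, gmean = [], []
for seed in (0, 1, 2):                                   # three repetitions
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for train_idx, test_idx in cv.split(X, y):
        clf = SVC(kernel="rbf", C=1.0).fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        acc.append(accuracy_score(y[test_idx], pred))
        gmean.append(geometric_mean_accuracy(y[test_idx], pred))

print(f"arithmetic accuracy: {np.mean(acc):.3f}, "
      f"geometric-mean accuracy: {np.mean(gmean):.3f}")
```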

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have released their in-house AI technology to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, as well as several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three conditions at the stage of using the deep learning framework, companies can increase the number of deep learning research developers, their ability to use the framework, and the supporting GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves developers' research efficiency and effectiveness, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by promoting developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three pre-consideration steps are clear, the next two steps (using the framework in the enterprise and spreading it across the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five success factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

An Empirical Study on the Influencing Factors for Big Data Intented Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang;Kim, Jin-soo
    • Asia Pacific Journal of Information Systems / v.24 no.4 / pp.443-472 / 2014
  • To survive in the global competitive environment, enterprises should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems and improving competitiveness with its varied problem-solving and advanced predictive capabilities. Owing to this performance, the implementation of big data systems has increased in many enterprises around the world. Big data is currently called the 'crude oil' of the 21st century and is expected to provide competitive superiority. Big data is in the limelight because, whereas conventional IT has lagged in its possibilities, big data goes beyond technical feasibility and can be used to create new value, such as business optimization and new business creation, through analysis. However, because big data has often been introduced hastily without considering the strategic value to be derived and achieved through it, enterprises have difficulty deriving strategic value and utilizing their data. According to a survey of 1,800 IT professionals from 18 countries worldwide, only 28% of corporations were utilizing big data well, and many respondents reported difficulties in deriving strategic value and operating big data. To introduce big data, strategic value should be derived and environmental aspects such as internal and external regulations and systems should be considered, but these factors have not been well reflected. The cause of failure turned out to be that big data was introduced in response to IT trends and the surrounding environment, but hastily, before the conditions for introduction were well arranged. For a successful introduction, the strategic value obtainable through big data should be clearly understood and a systematic environmental analysis of applicability is essential, but because corporations consider only partial achievements and technological aspects, successful introductions are not being made. Previous studies show that most big data research has focused on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying influencing factors for successful implementation, and analyzing empirical models. To do this, factors that can affect the intention to adopt big data were derived by reviewing the literature on information system success factors, strategic value perception factors, the information system introduction environment, and big data, and a structured questionnaire was developed. The questionnaire survey and statistical analysis were then conducted with the people in charge of big data within corporations as respondents.
The statistical analysis showed that the strategic value perception factors and intra-industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications derived from the results are as follows. The first theoretical implication is that this study proposed the factors affecting the intention to adopt big data by reviewing strategic value perception, environmental factors, and prior big data studies, and proposed variables and measurement items that were empirically analyzed and verified. It is meaningful in that it measured the influence of each variable on adoption intention by verifying the relationship between the independent and dependent variables through a structural equation model. Second, this study defined the independent variables (strategic value perception and environment), the dependent variable (adoption intention), and the moderating variables (type of business and corporate size) for big data adoption intention, and laid a theoretical basis for subsequent empirical studies in big data-related fields by developing measurement items with demonstrated reliability and validity. Third, by verifying the significance of the strategic value perception and environmental factors proposed in prior studies, this study can aid later empirical work on the factors affecting big data adoption. The practical implications are as follows. First, this study laid an empirical basis for the big data field by investigating the cause-and-effect relationship between strategic value perception and environmental factors and adoption intention, and by proposing measurement items with demonstrated validity and reliability. Second, this study found that strategic value perception positively affects the intention to adopt big data, highlighting the importance of strategic value perception. Third, the study proposes that a corporation introducing big data should base its decision on a precise analysis of its industry's internal environment. Fourth, this study shows that corporate size and type of business should be considered when introducing big data, since the influencing factors differ by size and type of business. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be realized in various ways in the product, service, productivity, and decision-making fields, and can be utilized across all business areas, but major domestic corporations are considering only parts of the product and service fields. Accordingly, when introducing big data, it is necessary to review utilization in detail and design the big data system in a form that maximizes its utilization. Second, the study identifies the burden of system introduction cost, difficulty of system utilization, and lack of trust in supplier corporations as issues in corporations' big data introduction phase.
Since global IT corporations dominate the big data market, domestic corporations' adoption of big data cannot help but depend on foreign corporations. Considering that Korea is a powerful IT country yet has no global IT corporations, big data can be seen as a chance to foster world-class corporations; accordingly, the government needs to foster star corporations through active policy support. Third, corporations lack internal and external professional manpower for big data introduction and operation. In big data, how valuable insight can be derived from data is more important than the system construction itself; for this, talent equipped with academic knowledge and experience in fields such as IT, statistics, strategy, and management is needed, and manpower training should be implemented through systematic education. This study laid a theoretical basis for empirical studies in big data-related fields by identifying and verifying the main variables that affect the intention to adopt big data, and it is expected to provide useful guidelines for corporations and policy makers considering big data implementation by analyzing that theoretical basis empirically.