• Title/Summary/Keyword: 함수의 개념 (concept of a function)


Analysis on the Influence of Moment Distribution Shape on the Effective Moment of Inertia of Simply Supported Reinforced Concrete Beams (철근콘크리트 단순보의 유효 단면2차모멘트에 대한 모멘트 분포 형상의 영향 분석)

  • Park, Mi-Young;Kim, Sang-Sik;Lee, Seung-Bae;Kim, Chang-Hyuk;Kim, Kang-Su
    • Journal of the Korea Concrete Institute / v.21 no.1 / pp.93-103 / 2009
  • The concept of the effective moment of inertia has generally been used to estimate the deflection of reinforced concrete flexural members. The KCI design code adopted Branson's equation for simple calculation of deflection, in which a representative value of the effective moment of inertia is used for the whole length of a member. However, the code equation for the effective moment of inertia was formulated from tests of beams subjected to uniformly distributed loads, and may not adequately account for members under different loading conditions. Therefore, this study aimed to verify experimentally the influence of the moment shapes resulting from different loading patterns. Six beams were fabricated and tested, with concrete compressive strength and loading distance from the supports as the primary variables, and the test results were compared with the code equation and other existing approaches. A method utilizing variational analysis for the deflection estimation has also been proposed, which accounts for the influence of the moment shape on the effective moment of inertia. The test results indicated that the effective moment of inertia was somewhat influenced by the moment shape, and that this influence was not captured by the code equation. Compared to the code equation, the proposed method showed smaller variation in the ratios of measured to estimated beam deflections. The proposed method is therefore considered a good approach for taking the influence of moment shape into account when estimating beam deflection; however, the differences between measured and estimated deflections show that further research is still required to improve its accuracy by modifying the shape function of the deflection.
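For reference, Branson's expression for the effective moment of inertia, as it is commonly written in design provisions (stated here from general knowledge of the ACI/KCI formulation, not quoted from the paper), interpolates between the gross and fully cracked section properties:

$$I_e=\left(\frac{M_{cr}}{M_a}\right)^{3}I_g+\left[1-\left(\frac{M_{cr}}{M_a}\right)^{3}\right]I_{cr}\le I_g$$

where $M_{cr}$ is the cracking moment, $M_a$ the maximum moment under the considered load, $I_g$ the gross moment of inertia, and $I_{cr}$ the cracked-section moment of inertia; a single representative $I_e$ is then used over the whole member length, which is exactly the simplification the paper examines for non-uniform moment distributions.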

Development of Intelligent ATP System Using Genetic Algorithm (유전 알고리듬을 적용한 지능형 ATP 시스템 개발)

  • Kim, Tai-Young
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.131-145 / 2010
  • The framework for making coordinated decisions for large-scale facilities has become an important issue in supply chain (SC) management research. The competitive business environment requires companies to continuously search for ways to achieve high efficiency and lower operational costs. In the area of production/distribution planning, many researchers and practitioners have developed and evaluated deterministic models to coordinate important and interrelated logistic decisions such as capacity management, inventory allocation, and vehicle routing. They initially investigated the various processes of the SC separately and later became more interested in problems encompassing the whole SC system. Accurate quotation of ATP (Available-To-Promise) plays a very important role in enhancing customer satisfaction and maximizing the fill rate. The complexity of an intelligent manufacturing system, which includes all the linkages among procurement, production, and distribution, makes accurate quotation of ATP quite difficult. Various alternative models for an ATP system with time lags have been developed and evaluated. In most cases, these models have assumed that the time lags are integer multiples of a unit time grid. However, integer time lags are very rare in practice, so models developed using integer time lags only approximate real systems, and the differences caused by this approximation frequently result in significant accuracy degradation. To introduce the ATP model with time lags, we first introduce the dynamic production function. Hackman and Leachman's dynamic production function initiated the research directly related to the topic of this paper. They propose a modeling framework for a system with non-integer time lags and show how to apply the framework to a variety of systems including continuous time series, manufacturing resource planning, and the critical path method. Their formulation requires no additional variables or constraints and is capable of representing real-world systems more accurately. Previously, to cope with non-integer time lags, a system was usually modeled either by rounding lags to the nearest integers or by subdividing the time grid to make the lags become integer multiples of the grid. Each approach has a critical weakness: the first approach underestimates lead times, potentially leading to infeasibilities, or overestimates them, potentially resulting in excessive work-in-process; the second approach drastically inflates the problem size. We consider an optimized ATP system with non-integer time lags in supply chain management. We focus on a system in which a worldwide headquarters, distribution centers, and manufacturing facilities are globally networked, and we develop a mixed integer programming (MIP) model for the ATP process, including the definition of the required data flow. The illustrative ATP module shows that the proposed system is largely effective in SCM. The system we are concerned with is composed of multiple production facilities with multiple products, multiple distribution centers, and multiple customers. For this system, we consider an ATP scheduling and capacity allocation problem. In this study, we proposed a model for the ATP system in SCM using the dynamic production function and considering non-integer time lags. The model is developed under a framework suitable for non-integer lags and is therefore more accurate than the models we usually encounter. We developed an intelligent ATP system for this model using a genetic algorithm. We focus on a capacitated production planning and capacity allocation problem, develop a mixed integer programming model, and propose an efficient heuristic procedure using an evolutionary system to solve it efficiently. This method makes it possible for the population to reach an approximate solution easily. Moreover, we designed and utilized a representation scheme that allows the proposed models to represent real variables. The proposed regeneration procedures, which evaluate each infeasible chromosome, make the solutions converge to the optimum quickly.
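The abstract does not give implementation details of the evolutionary heuristic, so the following is only a minimal, hypothetical Python sketch of the general idea: a real-coded genetic algorithm that allocates order quantities across capacitated facilities and repairs ("regenerates") infeasible chromosomes instead of discarding them. The data, fitness function, and parameters are illustrative assumptions, not the authors' model.

```python
import random

# Hypothetical data: 3 facilities with capacities, 4 orders with demands and unit profits.
CAPACITY = [100.0, 80.0, 60.0]
DEMAND = [50.0, 70.0, 40.0, 60.0]
PROFIT = [5.0, 4.0, 6.0, 3.0]
N_FAC, N_ORD = len(CAPACITY), len(DEMAND)

def random_chromosome():
    # Real-valued gene x[f][o]: quantity of order o promised from facility f.
    return [[random.uniform(0, DEMAND[o]) for o in range(N_ORD)] for _ in range(N_FAC)]

def repair(ch):
    # "Regeneration": scale down any facility row that exceeds capacity so the
    # chromosome becomes feasible rather than being thrown away.
    for f in range(N_FAC):
        load = sum(ch[f])
        if load > CAPACITY[f]:
            scale = CAPACITY[f] / load
            ch[f] = [q * scale for q in ch[f]]
    return ch

def fitness(ch):
    # Profit of promised quantities, capped at each order's demand.
    total = 0.0
    for o in range(N_ORD):
        promised = min(sum(ch[f][o] for f in range(N_FAC)), DEMAND[o])
        total += PROFIT[o] * promised
    return total

def crossover(a, b):
    # Arithmetic crossover, suited to real-valued genes.
    w = random.random()
    return [[w * a[f][o] + (1 - w) * b[f][o] for o in range(N_ORD)] for f in range(N_FAC)]

def mutate(ch, rate=0.1):
    for f in range(N_FAC):
        for o in range(N_ORD):
            if random.random() < rate:
                ch[f][o] = random.uniform(0, DEMAND[o])
    return ch

population = [repair(random_chromosome()) for _ in range(30)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]
    children = [mutate(crossover(random.choice(elite), random.choice(elite))) for _ in range(20)]
    population = elite + [repair(c) for c in children]

best = max(population, key=fitness)
print("best expected profit:", round(fitness(best), 2))
```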

Development and Application of a Methodology for Climate Change Vulnerability Assessment - Sea Level Rise Impact on a Coastal City (기후변화 취약성 평가 방법론의 개발 및 적용 해수면 상승을 중심으로)

  • Yoo, Ga-Young;Park, Sung-Woo;Chung, Dong-Ki;Kang, Ho-Jeong;Hwang, Jin-Hwan
    • Journal of Environmental Policy / v.9 no.2 / pp.185-205 / 2010
  • Climate change vulnerability assessment based on local conditions is a prerequisite for establishing climate change adaptation policies. While some studies have developed a methodology for vulnerability assessment at the national level using statistical data, few attempts, whether domestic or overseas, have been made to develop methods for local vulnerability assessments that are easily applicable to a single city. Accordingly, the objective of this study was to develop a conceptual framework for climate change vulnerability and then develop a general methodology for assessment at the regional level, applied to a single coastal city, Mokpo, in Jeolla province, Korea. We followed the conceptual framework of climate change vulnerability proposed by the IPCC (1996), which consists of "climate exposure," "systemic sensitivity," and "systemic adaptive capacity." "Climate exposure" was designated as sea level rises of 1, 2, 3, 4, and 5 meters, allowing for a simple sea level rise scenario; should more complex forecasts of sea level rise be required later, the methodology developed herein can easily be scaled and transferred to other projects. Mokpo was chosen from among the seaside cities on the southwest coast of Korea, all of which have experienced rising sea levels; Mokpo has experienced the largest sea level increase of all and is a region where abnormal high-tide events have become a significant threat, especially subsequent to the construction of an estuary dam and breakwaters. Sensitivity to sea level rise was measured by the percentage of flooded area for each administrative region within Mokpo, evaluated via simulations using GIS techniques. Population density, particularly that of senior citizens, was also factored in. Adaptive capacity was considered from both the "hardware" and "software" aspects. "Hardware" adaptive capacity was incorporated by considering the presence (or lack thereof) of breakwaters and seawalls, as well as their height. "Software" adaptive capacity was measured using a survey method; the questionnaire covered economic status, awareness of climate change impact and adaptation, governance, and policy, and was distributed to 75 governmental officials working for Mokpo. Vulnerability to sea level rise was assessed by subtracting adaptive capacity from the sensitivity index. Application of the methodology to Mokpo indicated that vulnerability was high for seven of the 20 administrative districts. The results provide significant policy implications for the development of climate change adaptation policy as follows: 1) regions with high priority for climate change adaptation measures can be selected through a correlation diagram between vulnerabilities and records of previous flood damage, and 2) after reviewing existing short-, mid-, and long-term plans or projects in high-priority areas, appropriate adaptation measures can be taken as suggested by this study. Future studies should focus on expanding the analysis of climate change exposure from sea level rise to other adverse climate-related events, including heat waves, torrential rain, and drought.

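The paper does not publish its scoring formulas, so the snippet below is only an illustrative rendering of the arithmetic the abstract describes: a sensitivity index built from flooded-area ratio and senior-population density, with vulnerability obtained by subtracting adaptive capacity from sensitivity. The district names, scores, and weights are hypothetical.

```python
# Hypothetical district-level scores, all normalized to a 0-1 scale.
districts = {
    # name: (flooded_area_ratio, senior_population_density, adaptive_capacity)
    "District A": (0.42, 0.30, 0.55),
    "District B": (0.15, 0.10, 0.70),
    "District C": (0.60, 0.45, 0.35),
}

for name, (flood, seniors, capacity) in districts.items():
    sensitivity = 0.7 * flood + 0.3 * seniors   # illustrative weighting
    vulnerability = sensitivity - capacity      # vulnerability = sensitivity - adaptive capacity
    print(f"{name}: vulnerability index = {vulnerability:+.2f}")
```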

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising (SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.167-194 / 2019
  • This research starts from four basic concepts that are confronted when making decisions in keyword bidding: incentive incompatibility, limited information, myopia, and the decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, through empirical tests, a statistical optimization model for constructing a Sponsored Search Advertising (SSA) portfolio from the sponsor's perspective that can be used in portfolio decision making. Previous research to date formulates the CTR estimation model using CPC, Rank, Impression, CVR, etc., individually or collectively as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable, but this classical model faces many hurdles in estimating CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of the CTR estimate, along with practical management problems. Sponsors make keyword-bid decisions under limited information, so a strategic portfolio approach based on statistical models is necessary. To solve the problem of the classical SSA model, the new SSA model is designed on the basic assumption that Rank is the decision variable. Rank has been proposed as the best decision variable for predicting CTR in many papers, and most search engine platforms provide options and algorithms that make it possible to bid by Rank, so sponsors can participate in keyword bidding with Rank. Therefore, this paper tests the validity of this new SSA model and its applicability to constructing the optimal portfolio in keyword bidding. The research process is as follows: to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationship, screens the scenarios for CTR and CPC estimation, selects the best-fit model through a Goodness-of-Fit (GOF) test, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover together with some strategic recommendations. Optimization models using these CTR/CPC estimation models are tested empirically with the objective functions of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and CVR optimization test results show that the suggested SSA model yields significant improvements and is valid for constructing the keyword portfolio using the CTR/CPC estimation models suggested in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of the myopia of their immediately low profit. To solve this problem, a Markov chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. A revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows: brand keywords are usually dominant in almost every aspect (CTR, CVR, expected profit, etc.), but the generic keywords are found to be the CTKs, with spillover potential that may increase consumer awareness and lead consumers to the brand keywords; this is why the generic keywords should be the focus of keyword bidding. The contributions of the thesis are to propose a novel SSA model with Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of keywords, to propose statistical modelling and management based on Rank in constructing the keyword portfolio, to perform empirical tests and propose new strategic guidelines focusing on the CTK, and to propose a modified CVR optimization objective function reflecting the spillover effect instead of the previous expected-profit models.
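Because the abstract only outlines the optimization step, the sketch below is a hypothetical toy version of a Rank-based portfolio choice: one Rank is selected per keyword so that expected clicks are maximized within a budget. The CTR/CPC estimates, impression counts, and budget are invented for illustration, and a real instance would be solved with the paper's mathematical programming formulation rather than brute force.

```python
from itertools import product

# Hypothetical per-keyword estimates: rank -> (expected CTR, expected CPC).
keywords = {
    "brand_kw":    {1: (0.120, 900), 2: (0.090, 700), 3: (0.060, 500)},
    "generic_kw":  {1: (0.050, 600), 2: (0.040, 450), 3: (0.030, 300)},
    "longtail_kw": {1: (0.020, 200), 2: (0.015, 150), 3: (0.010, 100)},
}
IMPRESSIONS = 1000        # assumed impressions per keyword
BUDGET = 130_000          # assumed total spend limit

best_clicks, best_plan = -1.0, None
# Brute-force enumeration of one Rank per keyword (fine for a toy problem).
for plan in product(*[[(kw, r) for r in ranks] for kw, ranks in keywords.items()]):
    clicks = sum(keywords[kw][r][0] * IMPRESSIONS for kw, r in plan)
    cost = sum(keywords[kw][r][0] * IMPRESSIONS * keywords[kw][r][1] for kw, r in plan)
    if cost <= BUDGET and clicks > best_clicks:
        best_clicks, best_plan = clicks, plan

print("chosen ranks:", dict(best_plan), "| expected clicks:", round(best_clicks, 1))
```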

Studies on the Physical Properties of Major Tree Barks Grown in Korea -Genus Pinus, Populus and Quercus- (한국산(韓國産) 주요(主要) 수종(樹種) 수피(樹皮)의 이학적(理學的) 성질(性質)에 관(關)한 연구(硏究) -소나무속(屬), 사시나무속(屬), 참나무속(屬)을 중심(中心)으로-)

  • Lee, Hwa Hyoung
    • Journal of Korean Society of Forest Science / v.33 no.1 / pp.33-58 / 1977
  • Bark comprises about 10 to 20 percent of a typical log by volume and is generally considered an unwanted residue rather than a potentially valuable resource. As the world is confronted with decreasing forest resources, natural resource pressures dictate that bark should be a raw material instead of a waste. The utilization of the largely wasted bark of the genera Pinus, Quercus, and Populus grown in Korea can be enhanced by learning its physical and mechanical properties. However, studies of the bark of trees grown in Korea have never been undertaken. In the present paper, an investigative study is carried out on the bark of three genera and eleven species, representing not only the major bark trees but also the major species currently grown in Korea. For each species, 20 trees were selected at the Suweon and Kwang-neung areas on the same basis of diameter class at the proper harvesting age, and one $200cm^2$ segment of bark was obtained from each tree at breast height. The physical properties of bark studied are: bark density, moisture content of green bark (inner, outer, and total bark), fiber saturation point, hysteresis loop, shrinkage, water absorption, specific heat, heat of wetting, thermal conductivity, thermal diffusivity, heat of combustion, and differential thermal analysis. The mechanical properties studied are bending and compressive strength (radial, longitudinal, and tangential). The results may be summarized as follows: 1. The oven-dry specific gravities differ between wood and bark; furthermore, even for a given bark sample, a difference is observed between inner and outer bark. 2. The oven-dry specific gravity of bark is higher than that of wood. This is attributed to the anatomical structure, whose character is manifested by a higher content of sieve fibers and sclereids. 3. Except for Pinus koraiensis, the oven-dry specific gravity of inner bark is higher than that of outer bark, which results from the higher shrinkage of inner bark. 4. The moisture content of bark increases in direct proportion to the composition ratio of sieve components and decreases with a higher percentage of sclerenchyma and periderm tissues. 5. The possibility of determining the fiber saturation point by measuring the heat of wetting is suggested. With the proposed method, the fiber saturation point of Pinus densiflora lies between 26 and 28%, and that of Quercus acutissima ranges from 24 to 28%. These results need to be further examined by other methods. 6. Contrary to the behavior of wood, bark shrinkage is highest in the radial direction and lowest in the longitudinal direction; Quercus serrata and Q. variabilis do not fall in this category. 7. Bark shows the same specific heat as wood, but the heat of wetting of bark is higher than that of wood, and the thermal conductivity of bark is lower than that of wood. From measurements of the oven-dry specific gravity ($\rho_d$) and the moisture-fraction specific gravity ($\rho_m$), the following regression equation is devised, from which the thermal conductivity can be calculated; the calculated thermal conductivity of bark is between $0.8{\times}10^{-4}$ and $1.6{\times}10^{-4}\;cal/cm{\cdot}sec{\cdot}deg$. $$K = 4.631 + 11.408\rho_d + 7.628\rho_m$$ 8. The bark thermal diffusivity varies from $8.03{\times}10^{-4}$ to $4.46{\times}10^{-4}\;cm^2/sec$. Differential thermal analysis shows that wood gives a higher thermogram than bark below the ignition point, but the tendency is reversed above the ignition point. 9. The modulus of rupture in static bending of bark is proportional to the density of bark, which gives the following regression equation: $$M = 243.78X - 12.02$$ The compressive strength of bark is highest in the radial direction, contrary to the behavior of wood, and the longitudinal compressive strength follows the tangential one in decreasing order.

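Since the abstract reports the conductivity regression and the resulting range but no worked values, here is a small illustrative calculation. The specific-gravity inputs are assumed, and the unit scale of K is inferred: the reported $0.8{\times}10^{-4}$ to $1.6{\times}10^{-4}\;cal/cm{\cdot}sec{\cdot}deg$ range is consistent with the regression returning K in units of $10^{-5}\;cal/cm{\cdot}sec{\cdot}deg$, which is an interpretation, not something the abstract states.

```python
# Worked example of the bark thermal-conductivity regression reported in the abstract:
#   K = 4.631 + 11.408*rho_d + 7.628*rho_m
# The specific-gravity inputs are assumed sample values; the unit scale of K is inferred
# (see the note above).

def bark_conductivity(rho_d, rho_m):
    """Regression estimate of bark thermal conductivity (scale as noted above)."""
    return 4.631 + 11.408 * rho_d + 7.628 * rho_m

for rho_d, rho_m in [(0.35, 0.10), (0.50, 0.20), (0.65, 0.30)]:  # assumed samples
    k = bark_conductivity(rho_d, rho_m)
    print(f"rho_d={rho_d:.2f}, rho_m={rho_m:.2f} -> K ~ {k:.2f} (x10^-5 cal/cm*sec*deg)")
```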

Analytical Method of Partial Standing Wave-Induced Seabed Response in Finite Soil Thickness under Arbitrary Reflection (임의반사율의 부분중복파동장에서 유한두께를 갖는 해저지반 내 지반응답의 해석법)

  • Lee, Kwang-Ho;Kim, Do-Sam;Kim, Kyu-Han;Kim, Dong-Wook;Shin, Bum-Shick
    • Journal of Korean Society of Coastal and Ocean Engineers / v.26 no.5 / pp.300-313 / 2014
  • Most analytical solutions for the wave-induced soil response have been developed mainly to investigate the influence of progressive and standing waves on the seabed response in a seabed of infinite thickness. This paper presents a new analytical solution of the governing equations for the wave-induced soil response under partial standing wave fields with arbitrary reflectivity in a porous seabed of finite thickness, using the effective stress formulation based on Biot's theory (Biot, 1941) and an elastic foundation coupled with linear wave theory. The newly developed solution for wave-seabed interaction in a seabed of finite depth has wide applicability as an analytical solution because it can easily be reduced to the previous analytical solutions by varying the water depth and the reflection ratio. For a more realistic wave field, the partial standing waves caused by breakwaters with arbitrary reflectivity are considered. The analytical solution was verified by comparison with the previous results for a seabed of infinite thickness under two-dimensional progressive and standing wave fields derived by Yamamoto et al. (1978) and Tsai & Lee (1994). Based on the analytical solution derived in this study, the influence of water depth and wave period on the characteristics of the seabed response under progressive, standing, and partial standing wave fields in a seabed of finite thickness was carefully examined. The analytical solution shows that the soil response (including pore pressure, shear stress, and horizontal and vertical effective stresses) for a seabed of finite thickness is quite different from that in an infinite seabed. In particular, this study also found that the wave-induced seabed response under partial standing wave conditions is reduced compared with standing wave fields and depends on the reflection coefficient.
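The snippet below does not reproduce the paper's Biot-based seabed solution; it only sketches, from standard linear wave theory, the partial-standing-wave pressure envelope acting on the seabed surface, which is the kind of boundary forcing such a solution takes as input. The wave height, period, water depth, and reflection coefficient are assumed values.

```python
import math

RHO, G = 1025.0, 9.81                 # sea-water density (kg/m^3), gravity (m/s^2)
H, T, h, Kr = 2.0, 8.0, 10.0, 0.4     # wave height, period, water depth, reflection coefficient

def wave_number(T, h, g=G):
    # Solve the dispersion relation omega^2 = g*k*tanh(k*h) by fixed-point iteration.
    omega = 2.0 * math.pi / T
    k = omega ** 2 / g                # deep-water first guess
    for _ in range(100):
        k = omega ** 2 / (g * math.tanh(k * h))
    return k

k = wave_number(T, h)
p0 = RHO * G * H / (2.0 * math.cosh(k * h))   # bottom-pressure amplitude of the incident wave

for x in [2.0 * i for i in range(6)]:         # sample positions along the direction of wave travel
    envelope = p0 * math.sqrt(1.0 + Kr ** 2 + 2.0 * Kr * math.cos(2.0 * k * x))
    print(f"x = {x:4.1f} m : seabed pressure amplitude ~ {envelope / 1000.0:.2f} kPa")
```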

Application of The Semi-Distributed Hydrological Model (TOPMODEL) for Prediction of Discharge at the Deciduous and Coniferous Forest Catchments in Gwangneung, Gyeonggi-do, Republic of Korea (경기도(京畿道) 광릉(光陵)의 활엽수림(闊葉樹林)과 침엽수림(針葉樹林) 유역(流域)의 유출량(流出量) 산정(算定)을 위한 준분포형(準分布型) 수문모형(水文模型)(TOPMODEL)의 적용(適用))

  • Kim, Kyongha;Jeong, Yongho;Park, Jaehyeon
    • Journal of Korean Society of Forest Science / v.90 no.2 / pp.197-209 / 2001
  • TOPMODEL, a semi-distributed hydrological model, is frequently applied to predict the amount of discharge, the main flow pathways, and water quality in a forested catchment, especially in a spatial dimension. TOPMODEL is a conceptual model, not a physical one. Its main concept is built on the topographic index and the soil transmissivity, two components that can be used to predict the surface and subsurface contributing areas. This study was conducted to validate the applicability of TOPMODEL to small forested catchments in Korea. The experimental area is located in the Gwangneung forest operated by the Korea Forest Research Institute in Gyeonggi-do, near the Seoul metropolitan area. Two study catchments in this area have been monitored since 1979: one is a natural mature deciduous forest (22.0 ha) about 80 years old and the other is a planted young coniferous forest (13.6 ha) about 22 years old. The data collected during two events in July 1995 and June 2000 at the mature deciduous forest and three events in July 1995, July 1999, and August 2000 at the young coniferous forest were used as the observed data sets. The topographic index was calculated using a $10m{\times}10m$ resolution raster digital elevation map (DEM). The distribution of the topographic index ranged from 2.6 to 11.1 at the deciduous catchment and from 2.7 to 16.0 at the coniferous catchment. The optimization, using the forecasting efficiency as the objective function, showed that the model parameter m and the catchment mean value of the surface saturated transmissivity, $lnT_0$, had high sensitivity. The optimized values of m and $lnT_0$ were 0.034 and 0.038, and 8.672 and 9.475, at the deciduous catchment, and 0.031, 0.032, and 0.033, and 5.969, 7.129, and 7.575, at the coniferous catchment, respectively. The forecasting efficiencies resulting from simulations with the optimized parameters were comparatively high: 0.958 and 0.909 at the deciduous catchment and 0.825, 0.922, and 0.961 at the coniferous catchment. The observed and simulated hyeto-hydrographs showed that the lag times to peak coincided well. Although the total runoff and peak flow of some events showed discrepancies between the observed and simulated output, TOPMODEL could, overall, predict the hydrologic output with an estimation error of less than 10%. Therefore, TOPMODEL is a useful tool for predicting runoff at ungauged forested catchments in Korea.

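The topographic index driving TOPMODEL is the standard quantity $\ln(a/\tan\beta)$, the upslope contributing area per unit contour length divided by the local slope. The sketch below computes it for a tiny synthetic grid with a simplified steepest-descent (D8-style) flow accumulation; the DEM values and the routing scheme are illustrative assumptions, not the authors' exact procedure.

```python
import math

# Tiny synthetic DEM (elevations in m) on a 10 m x 10 m grid, for illustration only.
CELL = 10.0
dem = [
    [105.0, 104.0, 103.0],
    [104.0, 102.0, 101.0],
    [103.0, 100.5, 100.0],
]
rows, cols = len(dem), len(dem[0])

def downslope_neighbor(r, c):
    """Steepest-descent (D8) neighbor of cell (r, c) and its slope; (None, 0) for a pit/outlet."""
    best, best_slope = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) == (0, 0) or not (0 <= rr < rows and 0 <= cc < cols):
                continue
            dist = CELL * math.hypot(dr, dc)
            slope = (dem[r][c] - dem[rr][cc]) / dist
            if slope > best_slope:
                best, best_slope = (rr, cc), slope
    return best, best_slope

# Accumulate upslope area by routing each cell's area down its steepest path,
# processing cells from high to low elevation.
area = [[CELL * CELL for _ in range(cols)] for _ in range(rows)]
order = sorted(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: dem[rc[0]][rc[1]], reverse=True)
for r, c in order:
    nbr, _ = downslope_neighbor(r, c)
    if nbr:
        area[nbr[0]][nbr[1]] += area[r][c]

for r in range(rows):
    for c in range(cols):
        _, tan_beta = downslope_neighbor(r, c)
        if tan_beta > 0:
            a = area[r][c] / CELL              # upslope area per unit contour length
            ti = math.log(a / tan_beta)        # topographic index ln(a / tan(beta))
            print(f"cell ({r},{c}): TI = {ti:.2f}")
```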

Relationship between Steady Flow and Dynamic Rheological Properties for Viscoelastic Polymer Solutions - Examination of the Cox-Merz Rule Using a Nonlinear Strain Measure - (점탄성 고분자 용액의 정상유동특성과 동적 유변학적 성질의 상관관계 -비선헝 스트레인 척도를 사용한 Cox-Merz 법칙의 검증-)

  • 송기원;김대성;장갑식
    • The Korean Journal of Rheology / v.10 no.4 / pp.234-246 / 1998
  • The objective of this study is to investigate the correlation between the steady shear flow (nonlinear) and dynamic viscoelastic (linear) properties of concentrated polymer solutions. Using both an Advanced Rheometric Expansion System (ARES) and a Rheometrics Fluids Spectrometer (RFS II), the steady shear flow viscosity and the dynamic viscoelastic properties of concentrated poly(ethylene oxide) (PEO), polyisobutylene (PIB), and polyacrylamide (PAAm) solutions were measured over a wide range of shear rates and angular frequencies. The validity of some previously proposed relationships was compared with the experimentally measured data. In addition, the effect of solution concentration on the applicability of the Cox-Merz rule was examined by comparing the steady flow viscosity and the magnitude of the complex viscosity. Finally, the applicability of the Cox-Merz rule was theoretically discussed by introducing a nonlinear strain measure. The main results obtained from this study can be summarized as follows: (1) Among the previously proposed relationships dealt with in this study, the Cox-Merz rule, which implies the equivalence of the steady flow viscosity and the magnitude of the complex viscosity, has the best validity. (2) For polymer solutions with relatively low concentration, the steady flow viscosity is higher than the complex viscosity; this relation between the two viscosities is reversed for highly concentrated polymer solutions. (3) The nonlinear strain measure decreases with increasing strain magnitude after reaching its maximum value in the small-strain range. This behavior differs from the theoretical prediction, which has the shape of a damped oscillatory function. (4) The applicability of the Cox-Merz rule is influenced by the $\beta$ value, which indicates the slope of the nonlinear strain measure (namely, the degree of nonlinearity) at large shear deformations. The Cox-Merz rule shows better applicability as the $\beta$ value becomes smaller.

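For reference, the Cox-Merz rule examined above is the empirical identity (stated here from general rheology usage, not quoted from the paper) between the steady shear viscosity and the magnitude of the complex viscosity evaluated at matching shear rate and angular frequency:

$$\eta(\dot{\gamma})\cong\left|\eta^{*}(\omega)\right|_{\omega=\dot{\gamma}},\qquad \left|\eta^{*}(\omega)\right|=\frac{\sqrt{G'(\omega)^{2}+G''(\omega)^{2}}}{\omega}$$

where $G'$ and $G''$ are the storage and loss moduli; the paper's question is how well this identity holds as the concentration and the nonlinearity parameter $\beta$ change.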

Corporate Credit Rating based on Bankruptcy Probability Using AdaBoost Algorithm-based Support Vector Machine (AdaBoost 알고리즘기반 SVM을 이용한 부실 확률분포 기반의 기업신용평가)

  • Shin, Taek-Soo;Hong, Tae-Ho
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.25-41 / 2011
  • Recently, support vector machines (SVMs) have been recognized as competitive tools compared with other data mining techniques for solving pattern recognition or classification decision problems. Furthermore, many studies have shown them to be more powerful than traditional artificial neural networks (ANNs) (Amendolia et al., 2003; Huang et al., 2004; Huang et al., 2005; Tay and Cao, 2001; Min and Lee, 2005; Shin et al., 2005; Kim, 2003). The classification decisions made by any classifier, i.e. a data mining technique, whether for a binary or a multi-class problem, are highly cost-sensitive, particularly in financial classification problems such as credit rating: if credit ratings are misclassified, a serious economic loss for investors or financial decision makers may result. Therefore, it is necessary to convert the outputs of the classifier into well-calibrated posterior probabilities and, from these bankruptcy probabilities, into multi-class credit ratings. However, SVMs do not inherently provide such probabilities, so some method is required to create them (Platt, 1999; Drish, 2001). This paper applied AdaBoost algorithm-based support vector machines (SVMs) to bankruptcy prediction as a binary classification problem for IT companies in Korea and then performed multi-class credit rating of the companies by forming a normal-distribution shape of posterior bankruptcy probabilities from the loss functions extracted from the SVMs. Our proposed approach also showed that it can minimize misclassification problems by adjusting the credit grade interval ranges, on the condition that each credit grade for credit loan borrowers has its own credit risk, i.e. bankruptcy probability.
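The abstract does not disclose the exact pipeline, so the following is only a rough, hypothetical sketch of the general idea in scikit-learn (version 1.2 or later assumed for the `estimator` keyword): boosting SVM base learners, converting scores into posterior bankruptcy probabilities with Platt-style sigmoid calibration, and mapping the probabilities onto grade intervals. The data, cut points, and parameters are placeholders, not the authors' settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split

# Placeholder data standing in for firm-level financial ratios and bankruptcy labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0.8).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# AdaBoost over SVM base learners (probability=True so boosting/calibration can use scores).
boosted_svm = AdaBoostClassifier(estimator=SVC(kernel="linear", probability=True),
                                 n_estimators=10, random_state=0)

# Platt-style sigmoid calibration turns classifier scores into posterior probabilities.
model = CalibratedClassifierCV(boosted_svm, method="sigmoid", cv=3)
model.fit(X_tr, y_tr)
bankruptcy_prob = model.predict_proba(X_te)[:, 1]       # P(bankrupt | features)

# Map posterior probabilities onto credit-grade intervals (cut points are illustrative).
cut_points = [0.05, 0.15, 0.35, 0.65]
grades = np.digitize(bankruptcy_prob, bins=cut_points)  # 0 = best grade, 4 = worst
print("grade distribution:", np.bincount(grades, minlength=5))
```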

An Exploratory Study on Determinants Affecting R Programming Acceptance (R 프로그래밍 수용 결정 요인에 대한 탐색 연구)

  • Rubianogroot, Jennifer;Namn, Su Hyeon
    • Management & Information Systems Review / v.37 no.1 / pp.139-154 / 2018
  • R is a free and open-source system associated with a rich and ever-growing set of libraries of functions developed and submitted by independent end-users, and it is recognized as a popular tool for handling and analyzing big data sets. Reflecting these characteristics, R has been gaining popularity among data analysts. However, the antecedents of R technology acceptance have not yet been studied. In this study we identify and investigate the cognitive factors contributing to user acceptance of R in an educational environment. We extend the existing technology acceptance model by incorporating social norms and software capability. It was found that the factors of subjective norm, perceived usefulness, and perceived ease of use positively affect the intention to accept R programming. In addition, perceived usefulness is related to subjective norm, perceived ease of use, and software capability. The main difference between this research and previous work is that the target system is not stand-alone. In addition, the system is not static, in the sense that it is not a final version; instead, R is an evolving, open-source system. We applied the Technology Acceptance Model (TAM) to a target system that is a platform on which diverse applications, such as statistical analysis, big data analysis, and visual rendering, can be performed. The model presented in this work can be useful both for colleges that plan to invest in new statistical software and for companies that need to pursue future installations of new technologies. In addition, we identified a modified version of the TAM that extends the original model with constructs such as subjective norm and software capability. However, one weak aspect that might inhibit the reliability and validity of the model is the small sample size.
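The abstract describes path relationships rather than an algorithm, so the following is only a loose, hypothetical sketch of how the two extended-TAM paths could be estimated as ordinary regressions; the survey data are synthetic and all variable names, coefficients, and sample sizes are assumptions, not the study's measurement model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic Likert-style survey responses (placeholders for the study's questionnaire data).
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "subjective_norm": rng.integers(1, 8, n),
    "ease_of_use": rng.integers(1, 8, n),
    "software_capability": rng.integers(1, 8, n),
})
df["usefulness"] = (0.3 * df.subjective_norm + 0.4 * df.ease_of_use
                    + 0.2 * df.software_capability + rng.normal(0, 1, n))
df["intention"] = (0.3 * df.subjective_norm + 0.4 * df.usefulness
                   + 0.2 * df.ease_of_use + rng.normal(0, 1, n))

# Path 1: perceived usefulness explained by subjective norm, ease of use, software capability.
usefulness_model = smf.ols("usefulness ~ subjective_norm + ease_of_use + software_capability",
                           data=df).fit()
# Path 2: intention to accept R explained by subjective norm, usefulness, ease of use.
intention_model = smf.ols("intention ~ subjective_norm + usefulness + ease_of_use",
                          data=df).fit()
print(usefulness_model.params.round(2))
print(intention_model.params.round(2))
```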