• Title/Summary/Keyword: Technology standard model


The Roles of Intermediaries in Clusters: The Thai Experiences in High-tech and Community-based Clusters

  • Intarakumnerd, Patarapong
    • Journal of Technology Innovation
    • /
    • v.13 no.2
    • /
    • pp.23-43
    • /
    • 2005
  • Industrial clusters are geographical concentrations of interconnected companies, specialised suppliers, service providers, firms in related industries, and associated institutions (for example, universities, standard agencies, and trade associations) that combine to create new products and/or services in specific lines of business. At present, the industrial cluster concept is very popular worldwide: policy makers at national, regional and local levels and business people in both forerunner and latecomer countries are keen to implement it as an economic development model. Although the understanding of clusters and related promotion policies varies from one place to another, the underlying benefits of clusters, collective learning and knowledge spillovers between participating actors, strongly attract the attention of these people. In Thailand, a latecomer country in terms of technological catching up, the cluster concept has been used as a means to rectify the weakness and fragmentation of its innovation systems. The present Thai government aspires to apply the concept to promote high-tech manufacturing clusters, services clusters and community-based clusters at the grass-roots level. This paper analyses three clusters that differ greatly in technological sophistication and business objectives, i.e., hard disk drive, software and chili paste. It portrays their significant actors, the extent of interaction among them and the evolution of the clusters. Though the three clusters are very dissimilar, they share common characteristics to which their qualified success can be attributed. The main driving forces of the three clusters are cluster intermediaries, whose forms range from a government research and technology organization (RTO) and an industrial association to a self-organised community-based organization. However, they perform the similar functions of stimulating information and knowledge sharing and building trust among the participating firms and individuals in the clusters. The cluster-studies literature argues that government policies need to be cluster specific. In this case, the best way to design and implement cluster-specific policies is to work closely with intermediaries and strengthen their institutional capacity, especially in linking member firms and individuals to other actors in the clusters, such as universities, government R&D institutes, and financial institutions.


3D-QSAR Studies on Chemical Features of 3-(benzo[d]oxazol-2-yl)pyridine-2-amines in the External Region of c-Met Active Site

  • Lee, Joo Yun;Lee, Kwangho;Kim, Hyoung Rae;Chae, Chong Hak
    • Bulletin of the Korean Chemical Society
    • /
    • v.34 no.12
    • /
    • pp.3553-3558
    • /
    • 2013
  • Three-dimensional quantitative structure-activity relationship (3D-QSAR) studies on the chemical features of pyridine-2-amines in the external region of the c-Met active site (ER chemical features of pyridine-2-amines) were conducted by docking, comparative molecular field analysis (CoMFA), and topomer CoMFA methods. The CoMFA model yielded partial least-squares (PLS) statistics of a cross-validated correlation coefficient ($q^2$) of 0.703 and a non-cross-validated correlation coefficient ($r^2$) of 0.947 with a standard error of estimate (SEE) of 0.23, while the topomer CoMFA model yielded a $q^2$ of 0.803, an $r^2$ of 0.940, and an SEE of 0.24. Further, a test set was applied to validate the predictive abilities of the models; the predictive $r^2$ ($r{^2}_{pred}$) values for the CoMFA and topomer CoMFA models were 0.746 and 0.608, respectively. The contributions of the ER chemical features of pyridine-2-amines to the inhibitory potency showed correlation coefficients ($r^2$) of 0.670 and 0.913 with the corresponding experimental $pIC_{50}$ for the two core parts, 3-(benzo[d]oxazol-2-yl)pyridine-2-amine and 3-(1-(2,6-dichloro-3-fluorophenyl)ethoxy)pyridine-2-amine, respectively.

Power consumption estimation of active RFID system using simulation (시뮬레이션을 이용한 능동형 RFID 시스템의 소비 전력 예측)

  • Lee, Moon-Hyoung;Lee, Hyun-Kyo;Lim, Kyoung-Hee;Lee, Kang-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.8
    • /
    • pp.1569-1580
    • /
    • 2016
  • For the 2.4 GHz active RFID to succeed in the market, one requirement is increased battery life. However, there is currently no accurate method for estimating power consumption. In this study we develop a simulation model that can be used to estimate tag power consumption accurately. Six different simulation models are proposed, depending on the collision algorithm and the query command method. To improve estimation accuracy, we classify the tag operating modes into wake-up receive, UHF receive, sleep timer, tag response, and sleep modes. Power consumption and operating time are identified according to the tag operating mode. A query command that simplifies the collection and ack command procedure and a newly developed collision control algorithm are used in the simulation. Other performance measures, such as throughput, recognition time for multiple tags, and tag recognition rate, as well as power consumption, are compared with those from the current standard ISO/IEC 18000-7.
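The per-mode bookkeeping the abstract describes (current draw and dwell time for each tag operating mode) reduces to a sum of V·I·t terms per collection round. The sketch below illustrates that accounting; the mode names follow the abstract, but all currents, times, and the battery voltage are illustrative values, not figures from the paper or from ISO/IEC 18000-7.

```python
V_BATT = 3.0  # battery voltage in volts (illustrative)

# mode: (current in mA, dwell time in ms) for one collection round
modes = {
    "wake_up_receive": (1.5, 40.0),
    "uhf_receive":     (12.0, 10.0),
    "sleep_timer":     (0.002, 900.0),
    "tag_response":    (18.0, 5.0),
    "sleep":           (0.001, 45.0),
}

def round_energy_mj(modes, v=V_BATT):
    """Energy per collection round in millijoules.

    V [V] * I [mA] * t [ms] gives microjoules; the 1e-3 factor converts to mJ.
    """
    return sum(v * i_ma * t_ms * 1e-3 for i_ma, t_ms in modes.values())

print(f"{round_energy_mj(modes):.3f} mJ per round")
```

Dividing the battery's usable energy by this per-round figure (plus standby drain) gives a first-order battery-life estimate, which is the quantity the simulation refines.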

A Case Study on OOP Component Build-up for Reliability of MRP System (MRP 시스템의 신뢰성을 위한 객체지향 컴포넌트 개발 사례)

  • Seo Jang Hoon
    • Journal of the Korea Safety Management & Science
    • /
    • v.6 no.3
    • /
    • pp.211-235
    • /
    • 2004
  • Component-based design is perceived as a key technology for developing advanced real-time systems in both a cost- and time-effective manner. Already today, component-based design is seen to increase software productivity by reducing the effort needed to update and maintain systems, by packaging solutions for re-use, and by easing distribution. Nowadays, countless companies in the IT (Information Technology) industry, such as SI (System Integration) and software development companies, regardless of the scale of their projects, spend their time and effort on developing reusable business logic. Component software is the outcome of software developers' efforts to overcome this problem; it is the approach proposed for quick and easy implementation of software. In addition, there has been much investment in researching and developing component development methodologies, and leading IT companies have released new standard technologies to support component development, for instance Microsoft's COM (Component Object Model) and DCOM (Distributed COM) technologies and Sun Microsystems' EJB (Enterprise JavaBeans) technology. Component-Based Development (CBD) has not redeemed its promises of reuse and flexibility. Reuse is inhibited by problems such as component retrieval, architectural mismatch, and application specificity. Component-based systems are flexible in the sense that components can be replaced and fine-tuned, but only under the assumption that the software architecture remains stable during the system's lifetime. This paper suggests that systems composed of components should be generated from functional and non-functional requirements rather than being composed out of existing or newly developed components. It also implements and models the development of a Product Control component by applying the CCD (Contract-Collaboration Diagram), one component development methodology, to an MRP (Material Requirement Planning) system.

Implementation of Zero-Ripple Line Current Induction Cooker using Class-D Current-Source Resonant Inverter with Parallel-Load Network Parameters under Large-Signal Excitation

  • Ekkaravarodome, Chainarin;Thounthong, Phatiphat;Jirasereeamornkul, Kamon
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.3
    • /
    • pp.1251-1264
    • /
    • 2018
  • A systematic and effective design method for a Class-D current-source resonant inverter for use in an induction cooker with zero-ripple line current is presented. The design procedure is based on the principle of the Class-D current-source resonant inverter with a simplified load network model that is a parallel equivalent circuit. An induction load characterization is obtained from a large-signal excitation test-bench based on the parallel load network, which is the key to an accurate design of the induction cooker system. Accordingly, the proposed scheme provides a more systematic, precise, and feasible solution than the existing design method based on a series-parallel load network under low-signal excitation. Moreover, a zero-ripple condition of the utility-line input current is naturally preserved without any extra circuit or control. Meanwhile, a differential-mode input electromagnetic interference (EMI) filter can be eliminated, high power quality in the utility line can be obtained, and standard-recovery diodes can be employed in the bridge rectifier. The step-by-step design procedure is explained with a design example. The device stress and power loss analysis of the induction cooker with a parallel load network under large-signal excitation are described. A 2,500-W laboratory prototype was developed for a $220-V_{rms}/50-Hz$ utility line to verify the theoretical analysis. The efficiency of the prototype is 96% at full load.

Post-fire test of precast steel reinforced concrete stub columns under eccentric compression

  • Yang, Yong;Xue, Yicong;Yu, Yunlong;Gong, Zhichao
    • Steel and Composite Structures
    • /
    • v.33 no.1
    • /
    • pp.111-122
    • /
    • 2019
  • This paper presents experimental work on the post-fire behavior of two kinds of innovative composite stub columns under eccentric compression. The partially precast steel reinforced concrete (PPSRC) column is composed of a precast outer part cast using steel fiber reinforced reactive powder concrete (RPC) and a cast-in-place inner part cast using conventional concrete. Based on the PPSRC column, the hollow precast steel reinforced concrete (HPSRC) column has a hollow column core. To investigate the post-fire performance of these composite columns, six stub column specimens, including three HPSRC stub columns and three PPSRC stub columns, were exposed to the ISO834 standard fire. Then, the cooled specimens and a control specimen unexposed to fire were eccentrically loaded to explore the residual capacity. The test parameters included the section shape, the concrete strength of the inner part, the eccentricity ratio and the heating time. The test results indicated that the precast RPC shell could effectively confine the steel shape and longitudinal reinforcements after fire, and that the PPSRC stub columns experienced lower core temperatures in fire and exhibited higher post-fire residual strength than the HPSRC stub columns due to the insulating effect of the core concrete. The residual capacity increased with increasing inner concrete strength and with decreasing heating time and load eccentricity. Based on the test results, an FEA model was established to simulate the temperature field of the test specimens, and the predicted results agreed well with the test results.

Improved prediction of soil liquefaction susceptibility using ensemble learning algorithms

  • Satyam Tiwari;Sarat K. Das;Madhumita Mohanty;Prakhar
    • Geomechanics and Engineering
    • /
    • v.37 no.5
    • /
    • pp.475-498
    • /
    • 2024
  • Predicting the susceptibility of soil to liquefaction using a limited set of parameters, particularly when dealing with highly unbalanced databases, is a challenging problem. The current study focuses on different ensemble learning classification algorithms applied to highly unbalanced databases of results from in-situ tests: the standard penetration test (SPT), the shear wave velocity (Vs) test, and the cone penetration test (CPT). The input parameters for these datasets consist of earthquake intensity parameters, strong ground motion parameters, and in-situ soil testing parameters, with the liquefaction index serving as the binary output parameter. After a rigorous comparison with the existing literature, extreme gradient boosting (XGBoost), bagging, and random forest (RF) emerge as the most efficient models for liquefaction instance classification across the different datasets. Notably, for the SPT- and Vs-based models, XGBoost exhibits superior performance, followed by light gradient boosting machine (LightGBM) and bagging, while for the CPT-based models, bagging ranks highest, followed by gradient boosting and random forest; the CPT-based models demonstrate a lower Gmean(error), rendering them preferable for soil liquefaction susceptibility prediction. Key parameters influencing model performance include the internal friction angle of soil (ϕ) and the percentage of fines smaller than 75 µm (F75) for the SPT and Vs data, and the normalized average cone tip resistance (qc) and peak horizontal ground acceleration (amax) for the CPT data. It was also observed that adding Vs measurements to the SPT data increased prediction efficiency compared with SPT data alone. Furthermore, to enhance usability, a graphical user interface (GUI) for seamless classification operations based on the provided input parameters was proposed.
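The Gmean metric the abstract relies on is the geometric mean of sensitivity and specificity, which, unlike plain accuracy, penalizes a classifier that ignores the rare (liquefied) class. The sketch below evaluates two of the ensemble methods named above on a synthetic unbalanced dataset; the data, class ratio, and hyperparameters are illustrative stand-ins, not the paper's SPT/Vs/CPT databases.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Unbalanced binary dataset standing in for an in-situ test database
# (features ~ earthquake intensity / ground motion / soil parameters).
X, y = make_classification(n_samples=600, n_features=8,
                           weights=[0.85, 0.15], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

def gmean(y_true, y_pred):
    """Geometric mean of sensitivity and specificity."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    sens = tp / max(np.sum(y_true == 1), 1)
    spec = tn / max(np.sum(y_true == 0), 1)
    return float(np.sqrt(sens * spec))

for name, clf in [("random_forest", RandomForestClassifier(random_state=0)),
                  ("bagging", BaggingClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, round(gmean(y_te, clf.predict(X_te)), 3))
```

Gmean(error) as used in the abstract is simply the complementary quantity, so a lower value there corresponds to a higher geometric mean here.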

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.101-124
    • /
    • 2018
  • Recently, most technologies have developed in various forms, either through the advancement of a single technology or through interaction with other technologies. In particular, these technologies exhibit convergence, arising from the interaction between two or more techniques. Efforts to respond to technological change in advance, by forecasting the promising convergence technologies that will emerge in the near future, are also continuously increasing. Accordingly, many researchers are attempting various analyses for forecasting promising convergence technologies. A convergence technology carries the characteristics of the various technologies from which it is generated; forecasting promising convergence technologies is therefore much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been made in forecasting promising technologies using big data analysis and social network analysis. Data-driven studies of convergence technology are actively conducted on the themes of discovering new convergence technologies and analyzing their trends, so information about new convergence technologies is now available more abundantly than in the past. However, existing methods for analyzing convergence technology have limitations. First, most studies of convergence technology analyze data through predefined technology classifications. Recent technologies tend to be convergent and thus consist of technologies from various fields, so a new convergence technology may not belong to any predefined classification; the existing methods therefore do not properly reflect the dynamic change of the convergence phenomenon.
Second, to forecast promising convergence technologies, most existing analysis methods use general-purpose indicators, which do not fully exploit the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates; depending on changes in those technologies, it can grow into an independent field or disappear rapidly. In existing analyses, the growth potential of a convergence technology is judged through traditional indicators designed for general purposes, which do not reflect the principle of convergence, namely that new technologies emerge from two or more mature technologies and that grown technologies in turn affect the creation of other technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. Because of the complexity of the field, forecasting promising technologies has been a relatively underexplored subject in convergence technology studies, and it is difficult to find a method for evaluating the accuracy of such forecasting models. To activate this field, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study. To overcome these limitations, we propose a new method for the analysis of convergence technologies. First, through topic modeling, we derive a new technology classification in terms of text content; it reflects the dynamic change of the actual technology market rather than an existing fixed classification standard.
In addition, we identify the influence relationships between technologies through the topic correspondence weights of each document and structure them into a network. We also devise a centrality indicator (PGC, potential growth centrality) to forecast the future growth of a technology by utilizing the centrality information of each technology; it reflects the convergence characteristics of each technology according to technology maturity and the interdependence between technologies. Along with this, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. In this paper, we conduct experiments with 13,477 patent documents to evaluate the performance and practical applicability of the proposed method. The results confirm that the forecasting model based on the proposed centrality indicator achieves a forecast accuracy up to about 2.88 times higher than that of models based on currently used network indicators.
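The idea behind a maturity- and dependency-aware centrality can be sketched on a toy technology network. The score below is a simplified proxy invented for illustration, not the paper's actual PGC formula: a technology scores high when it is itself immature but depends on several mature technologies. The graph, node names, and maturity values are all hypothetical.

```python
import networkx as nx

# Toy directed technology network: an edge A -> B means technology B
# depends on (grew out of) technology A. Maturity values are illustrative.
G = nx.DiGraph()
G.add_nodes_from([("sensors",       {"maturity": 0.9}),
                  ("ml",            {"maturity": 0.8}),
                  ("iot_analytics", {"maturity": 0.3}),
                  ("smart_farming", {"maturity": 0.1})])
G.add_edges_from([("sensors", "iot_analytics"), ("ml", "iot_analytics"),
                  ("iot_analytics", "smart_farming"), ("ml", "smart_farming")])

def potential_growth(G):
    """Hypothetical proxy for a potential-growth centrality: weight a node's
    remaining headroom (1 - maturity) by the total maturity of the
    technologies it depends on."""
    scores = {}
    for n in G.nodes:
        parent_maturity = sum(G.nodes[p]["maturity"] for p in G.predecessors(n))
        scores[n] = (1.0 - G.nodes[n]["maturity"]) * parent_maturity
    return scores

for tech, s in sorted(potential_growth(G).items(), key=lambda kv: -kv[1]):
    print(tech, round(s, 2))
```

Tracking how such a score changes from period to period is the kind of signal the abstract uses to evaluate whether a flagged technology actually grew.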

Simulation of the High Frequency Hyperthermia for Tumor Treatment (종양치료용 고주파 열치료 인체적용 시뮬레이션)

  • Lee, Kang-Yeon;Jung, Byung-Geun;Kim, Ji-won;Park, Jeong-Suk;Jeong, Byeong-Ho
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.3
    • /
    • pp.257-263
    • /
    • 2018
  • Hyperthermia supplies RF high-frequency energy above 1 MHz to the tumor tissue through electrodes, raising the temperature of the tumor tissue to $42^{\circ}C$ or more to cause thermal necrosis. A mathematical model of the human body can be derived for the absorption and transmission of electromagnetic energy, making it possible to evaluate the distribution of temperature fields in biological tissues. In this paper, we build a human model based on the geometric shape of the adult standard 3D model and use an FVM code. It is assumed that Joule heat is supplied to the anatomical model to simulate the magnetic field induced by the external electrode, and the temperature distribution was analyzed for 0-1,200 seconds. The simulation confirmed that the transferred energy progressively penetrates from the edge of the electrode to the pulmonary tumors and from the skin surface to the subcutaneous layer.

An Analysis on the Economic Impacts of the Bio-gas Supply Sector (바이오가스 공급 확대의 경제적 파급효과 분석)

  • Baek, Min-Ji;Kim, Ho-Young;Yoo, Seung-Hoon
    • Journal of Energy Engineering
    • /
    • v.23 no.2
    • /
    • pp.74-82
    • /
    • 2014
  • The government is planning to expand the bio-gas supply as a method of mitigating greenhouse gas emissions to deal with climate change. As a policy instrument, the government is considering introducing a Renewable Fuel Standard (RFS) whose targets include bio-gas. This paper looks into the economic effects of expanding the bio-gas supply by applying input-output (I-O) analysis to the 2011 I-O table. The bio-gas supply sector is defined to consist of the liquefied petroleum gas supply sector and the city gas supply sector, based on the tenets of introducing the RFS. The production-inducing effect, value-added creation effect, and employment-inducing effect of the bio-gas sector are analyzed, and the supply-shortage effect and the price-pervasive effect are also investigated. The results show that production or investment of 1.0 won in the bio-gas supply sector induces production of 1.0539 won and value-added of 0.1998 won in the national economy. Moreover, production or investment of 1.0 billion won induces employment of 0.5279 persons, a supply shortage of 1.0 won entails an effect of 1.6229 won, and a 10.0% price increase in the bio-gas supply sector raises the overall price level by 0.0183%, respectively.
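The production-inducing coefficients reported in this abstract come from standard Leontief input-output algebra: the column sums of the Leontief inverse $(I - A)^{-1}$ give the total (direct plus indirect) output induced per unit of final demand in each sector. The sketch below uses an illustrative 3-sector coefficient matrix, not the 2011 Korean I-O table.

```python
import numpy as np

# Illustrative input coefficient matrix A: a_ij is the input from sector i
# needed per unit of output of sector j (values are made up for the sketch).
A = np.array([[0.10, 0.05, 0.20],
              [0.15, 0.08, 0.10],
              [0.05, 0.12, 0.06]])

# Leontief inverse: total output required across all sectors per unit
# of final demand in each sector.
L = np.linalg.inv(np.eye(3) - A)

# Production-inducing coefficient of sector j = column sum of L.
production_inducing = L.sum(axis=0)
print(np.round(production_inducing, 4))
```

A figure such as the paper's 1.0539 won of production induced per 1.0 won of bio-gas investment is exactly this kind of column sum, read off for the bio-gas supply sector.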