• Title/Summary/Keyword: R&D cooperation system


Research on Text Classification of Research Reports using Korea National Science and Technology Standards Classification Codes (국가 과학기술 표준분류 체계 기반 연구보고서 문서의 자동 분류 연구)

  • Choi, Jong-Yun;Hahn, Hyuk;Jung, Yuchul
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.1 / pp.169-177 / 2020
  • In South Korea, the results of science and technology R&D are submitted to the National Science and Technology Information Service (NTIS) in reports bearing Korea national science and technology standard classification codes (K-NSCC). However, considering that there are more than 2,000 sub-categories, it is non-trivial to choose the correct classification codes without a clear understanding of the K-NSCC. In addition, there are few cases of automatic document classification research based on the K-NSCC, and no training data are available in the public domain. To the best of our knowledge, this study is the first attempt to build a high-performing K-NSCC classification system based on NTIS report meta-information from the last five years (2013-2017). To this end, about 210 mid-level categories were selected, and we conducted preprocessing considering the characteristics of research report metadata. More specifically, we propose a convolutional neural network (CNN) technique using only task names and keywords, which are the most influential fields. The proposed model was compared with several machine learning methods that perform well in text classification (e.g., the linear support vector classifier, CNN, and gated recurrent unit) and shows a performance advantage of 1% to 7% based on the top-three F1 score.
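As a concrete illustration of the top-three evaluation criterion mentioned in the abstract, here is a minimal Python sketch: a prediction counts as correct when the true K-NSCC code appears among the model's three highest-scoring codes. The code names and scores below are invented for illustration; this is not the paper's code.

```python
# Hypothetical "top-three" scoring sketch. A prediction is a hit when the
# true label is among the k highest-scoring labels; accuracy is the hit rate.

def top_k_correct(scores, true_label, k=3):
    """True if true_label is among the k highest-scoring labels."""
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    return true_label in ranked

def top_k_accuracy(batch, k=3):
    """batch: list of (score_dict, true_label) pairs."""
    hits = sum(top_k_correct(scores, label, k) for scores, label in batch)
    return hits / len(batch)

batch = [
    ({"EE05": 0.6, "EE07": 0.3, "NB02": 0.1}, "EE07"),  # hit: rank 2
    ({"EE05": 0.5, "ND01": 0.4, "NB02": 0.1}, "SB03"),  # miss
]
accuracy = top_k_accuracy(batch)  # 0.5
```

A top-three criterion is a common relaxation for large label inventories like the K-NSCC, where near-miss codes are often acceptable to a human curator.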

A Study on Application of Test Bed for Verification of Realistic Fire Management Technology (실감형 화재관리기술 검증을 위한 테스트베드 적용방안 연구)

  • Choi, Woo-Chul;Kim, Tae-Hoon;Youn, Joon-Hee
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.3 / pp.745-753 / 2021
  • Recently, a large fire occurred in a multi-use facility used by a large number of citizens, including the vulnerable, resulting in many injuries and extensive damage. Although several pilot studies have been conducted to reduce such incidents, the development of advanced disaster response technology using the latest spatial information and IoT technology is still insufficient. In this study, a pilot test bed was built to demonstrate the detailed technologies derived through the first stage of realistic fire management technology research, so that applied technology can be developed in the field. In detail, the building conditions and candidate sites of the test bed were first investigated and analyzed to derive satisfactory conditions and candidate target buildings. Second, a pilot test bed was selected, and the necessary sensor and facility infrastructure was built to demonstrate the outcomes. Finally, a scenario was produced for technology verification, and a test bed system was developed. The pilot test bed is expected to contribute to verifying the intermediate outcomes of realistic fire management research projects, enhancing the quality of the developed technologies.

Study on the Concentration Estimation Equation of Nitrogen Dioxide using Hyperspectral Sensor (초분광센서를 활용한 이산화질소 농도 추정식에 관한 연구)

  • Jeon, Eui-Ik;Park, Jin-Woo;Lim, Seong-Ha;Kim, Dong-Woo;Yu, Jae-Jin;Son, Seung-Woo;Jeon, Hyung-Jin;Yoon, Jeong-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.6 / pp.19-25 / 2019
  • CleanSYS (Clean SYStem) is operated in Korea to monitor air pollutants emitted from specific industrial complexes; industrial complexes without the system are monitored directly by control officers. For efficient monitoring, studies using various sensors have been conducted to monitor air pollutants emitted from industrial complexes. In this study, hyperspectral sensors were used to model and verify equations for estimating the concentration of $NO_2$ (nitrogen dioxide) among the emitted air pollutants. To develop the equations, spectral radiances were observed for $NO_2$ at various concentrations with different SZA (Solar Zenith Angle), VZA (Viewing Zenith Angle), and RAA (Relative Azimuth Angle). From the observed spectral radiance, the difference between the values at specific wavelengths was taken as an absorption depth, and the equations were developed using the relationship between the depth and the $NO_2$ concentration. The spectral radiance of a mixed gas of $NO_2$ and $SO_2$ (sulfur dioxide) was used to verify the equations. As a result, the $R^2$ (coefficient of determination) and RMSE (Root Mean Square Error) ranged from 0.71 to 0.88 and from 72 to 23 ppm depending on the form of the equation, and the $R^2$ of the exponential form was the highest among the equations. Depending on the type of equation, the accuracy of the estimated concentration is not constant across concentrations. However, if the equations are refined in the future, hyperspectral sensors can be used to monitor the $NO_2$ emitted from industrial complexes.
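The exponential form of the estimation equation can be illustrated with a small sketch, assuming a hypothetical model C = a·exp(b·depth) between absorption depth and concentration, fitted by log-linear least squares. The functional form, coefficients, and sample points below are invented for demonstration and are not the paper's equation or data.

```python
import math

# Illustrative sketch: fit C = a * exp(b * depth) by linearising with a
# logarithm, ln(C) = ln(a) + b * depth, and solving ordinary least squares.
# The sample points are synthetic, generated exactly on the model curve.

def fit_exponential(depths, concs):
    """Least-squares fit of ln(C) = ln(a) + b * depth; returns (a, b)."""
    n = len(depths)
    ys = [math.log(c) for c in concs]
    mx = sum(depths) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(depths, ys))
         / sum((x - mx) ** 2 for x in depths))
    a = math.exp(my - b * mx)
    return a, b

depths = [0.01, 0.02, 0.04, 0.08]
concs = [100.0 * math.exp(50.0 * d) for d in depths]  # synthetic ppm values
a, b = fit_exponential(depths, concs)                 # recovers a ~ 100, b ~ 50
estimate = a * math.exp(b * 0.03)                     # estimated ppm at depth 0.03
```

With real spectra the points scatter around the curve, so the fit quality would be reported with $R^2$ and RMSE, as in the abstract.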

Promotion Strategies for Daegu-Kyungbuk Mobile Cluster: Searching for Alternative Regional Innovation Governance (대구.경북 모바일 클러스터 육성전략: 지역혁신 거버넌스의 대안 모색)

  • Lee, Jeong-Hyop;Kim, Hyung-Joo
    • Journal of the Economic Geographical Society of Korea / v.12 no.4 / pp.477-493 / 2009
  • This research aims to examine Korean regional innovation governance, identify its structural problems, and explore alternative strategies for regional innovation governance. In particular, alternative governance was explored through a case study of the Daegu-Kyungbuk mobile cluster, in whose formation Samsung is the anchor institution. Regional innovation governance is defined in this research as a policy system linking the knowledge generation & diffusion subsystem with the knowledge application & exploitation subsystem, together with the institutional conditions to steer the system. The World Bank's "Social Capital Assessment Tool (SOCAT)" was utilized to assess cluster governance. The regional innovation governance of the Daegu-Kyungbuk mobile cluster is characterized by production networks dominated by one-to-one relationships between Samsung and hardware/software developers, decentralized R&D networks, and policy networks with multiple hubs. Major policy agents have not developed networks with local companies, and interactions between the policy agents are rare. Local companies, especially software developers, responded that they have cooperated in local problem solving and share a community goal; however, their degree of trust in major local project leaders is not high. Local hardware/software developers with core technologies need to cooperate to develop similar technologies or products in the Daegu-Kyungbuk mobile cluster. Regional administrative actors, such as the City of Daegu and Kyungsangbuk-do, and diverse innovation-related institutes should build a cooperative environment in which diverse project-based cooperation units are incessantly created, taken apart, and recreated.


A Comparative Study on the Traditional Medicine Policies between Korea and China: Focused on the Second Korean Medicine Development Plan and the 12.5 Traditional Chinese Medicine Development Plan (한국과 중국의 전통의학 정책 비교: 제2차 한의약육성발전계획 및 중의약사업발전 12.5규획 중심)

  • Ko, Chang-Ryong;Ku, Nam-Pyong;Seol, Sung-Soo
    • Journal of Korea Technology Innovation Society / v.17 no.2 / pp.421-447 / 2014
  • Traditional medicine has been integrated into the national health system in many countries, such as Korea, China, and Taiwan; Korea and China are the most representative among them. The purpose of this study is to compare the policies on traditional medicine in Korea and China, focusing on where each came from and where it is headed. In this regard, the study suggests the first analysis tool in the world for analyzing traditional medicine policy. The results of the study are as follows. First, the development processes of Korean Medicine (KM) and Traditional Chinese Medicine (TCM) show the same pattern: both have been shaped by their respective national policies. Second, the differences between the two countries stem from gaps in development status and from different aspects of each national health system. TCM plays a larger role in the health system, covers a broader category, and is ahead of KM in globalization; it encompasses Chinese medicine, integrative medicine, and ethno-medicine. Korea emphasizes the role of KM in a society with a declining birthrate and aging population, and promotes a strategy of inviting overseas patients. China, on the other hand, is establishing a medical system for emergency medical treatment and preventive treatment of major diseases and promotes the overseas expansion of TCM services. In addition, Korea stresses the safety and distribution of herbal medicine, while China emphasizes production technologies. Korea has a strong medical device industry supported by the government's fostering policy, whereas in China medical devices are still at the R&D stage. Although both countries promote drug development from natural products, Korea focuses on developing herbal cosmetics in the application industry, while China's policies on the application industry are weak. China shores up the foundation for the culture and theory of traditional medicine, while Korea has no related policy. Korea places emphasis on promoting collaboration with international organizations and medical volunteer programs, whereas China is more interested in mutual cooperation and real trade with other countries.

Analysis and Selection of Microsatellites Markers for Individual Traceability System in Hanwoo (한우 생산이력제에 활용 가능한 Microsatellite의 분석과 선발)

  • Lim, H.T.;Min, H.S.;Moon, W.G.;Lee, J.B.;Kim, J.H.;Cho, I.C.;Lee, H.K.;Lee, Y.W.;Lee, J.G.;Jeon, J.T.
    • Journal of Animal Science and Technology / v.47 no.4 / pp.491-500 / 2005
  • To test their applicability to the Hanwoo traceability system, twenty microsatellite markers were selected and analyzed. The MSA, CERVUS, FSTAT, GENEPOP, API_CALC, and PHYLIP software packages were employed in sequence to estimate heterozygosity, polymorphic information content, F-statistics, identity probability, exclusion probability, and genetic distance. Eleven microsatellite markers (TGLA53, TGLA227, ETH185, TGLA122, BM4305, INRA23, ILSTS013, BMS1747, BM2113, BL1009, and ETH3) were selected based on their high heterozygosity values. The identity probability using these markers is one hundred times higher than when using StockMakersTM of Applied Biosystems. This indicates that the selected microsatellite markers are appropriate and effective for use in the Hanwoo traceability system. Additionally, estimates of DA genetic distance and pairwise-FST can be utilized to identify genetic relationships between adjacent farms.
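The marker-selection statistics named in the abstract (heterozygosity and identity probability) can be sketched directly from allele frequencies using the standard population-genetics formulas; the frequencies below are illustrative, not the paper's Hanwoo data.

```python
# Standard single-locus statistics behind microsatellite marker selection.
# freqs is a list of allele frequencies at one locus (summing to 1).

def heterozygosity(freqs):
    """Expected heterozygosity at one locus: He = 1 - sum(p_i^2)."""
    return 1.0 - sum(p * p for p in freqs)

def identity_probability(freqs):
    """Probability that two random individuals share a genotype at one
    locus: PI = sum(p_i^4) + sum_{i<j} (2 * p_i * p_j)^2."""
    pi = sum(p ** 4 for p in freqs)
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            pi += (2.0 * freqs[i] * freqs[j]) ** 2
    return pi

def combined_pi(loci):
    """Identity probabilities multiply across independent loci, which is
    why a panel of markers sharpens individual traceability."""
    total = 1.0
    for freqs in loci:
        total *= identity_probability(freqs)
    return total

locus = [0.5, 0.5]                 # two equally frequent alleles (toy case)
he = heterozygosity(locus)         # 0.5
pi = identity_probability(locus)   # 0.375
```

High-heterozygosity markers drive the per-locus identity probability down, so multiplying over eleven such loci yields the very small match probabilities a traceability system needs.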

USN's Efforts to Rebuild its Combat Power in an Era of Great Power Competition (강대국 간의 경쟁시대와 미 해군의 증강 노력)

  • Jung, Ho-Sub
    • Strategy21 / s.44 / pp.5-27 / 2018
  • The purpose of this paper is to examine the USN's efforts to rebuild its combat power in the face of a reemergence of great-power competition, and to propose some recommendations for the ROKN. In addition to the plan to augment its fleet toward a 355-ship capacity, the USN is pursuing exponential improvements in the combat lethality (quality) of its existing fleet by means of innovative science and technology. In other words, the USN is putting its utmost effort into improving the readiness of current forces, modernizing maintenance facilities such as naval shipyards, and simultaneously investing in innovative weapons system R&D for the future. The USN appears to pursue innovations in advanced military science and technology as the best way to ensure continued supremacy in the coming strategic competition between great powers. However, it remains to be seen whether the USN can smoothly continue these efforts to rebuild combat strength vis-a-vis its new peer competitors, namely the Chinese and Russian navies, due to stringent fiscal constraints originating, among other things, from the 2011 Budget Control Act, which is still in effect. It seems to be China's unilateral and assertive behavior to expand its maritime jurisdiction in the South China Sea that drives the USN's rebuilding efforts. Changes are now being perceived in the basic framework of regional maritime security, in the form of the USN's declining sea control and a withering maritime order based on international law and norms. However, the ROK-US alliance system is the most excellent security mechanism upon which the ROK, as a trading power, depends for its survival and prosperity. In addition, as the denuclearization of North Korea seems likely to take significant time and effort to accomplish, the US nuclear umbrella and extended deterrence remain indispensable for the security of the ROK.
In this connection, naval cooperation between the ROKN and USN should be seen and strengthened as the most important deterrent to North Korean nuclear and missile threats, as well as to potential maritime provocation by neighboring countries. Based on these observations, this paper argues that the ROK Navy should try to expand its own deterrent capability by pursuing selective technological innovation in order to prevent this country's destiny from being dictated by other powers. In doing so, however, it may be too risky for the ROK to pursue emerging, disruptive technologies such as railguns and hypersonic weapons, given the enormous budgets and time required and the very thin chance of success. This paper therefore recommends carefully selecting and extensively investing in the most cost-effective technological innovations, suitable for the operational environment of the ROK. In particular, this paper stresses the following six areas as the most promising naval innovations for the ROK Navy: long-range precision strike; air and missile defense at sea; ASW with various unmanned maritime systems (UMS) such as USVs and UUVs based on advanced hydro-acoustic sensor (sonar) technology; networks; digitalization for the use of AI and big data; and nuclear-powered attack submarines as a strategic deterrent.

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.975-976 / 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which have involved two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture; here we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS), storing and executing 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM);
on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format IF A and B and C and D THEN Do E and Do F; with this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast but limited in generality, since many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a rule-based fuzzy inference program from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, and such instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so creating a specialized embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required for a single inference; and the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES
                     MIPS R3000 (regular)   MIPS R3000 (min/max)   ASIC
  6000 inferences    125 s                  49 s                   0.0038 s
  1 inference        20.8 ms                8.2 ms                 6.4 ㎲
  FLIPS              48                     122                    156,250
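The max-min compositional inference that the chips implement in hardware can be sketched in software. This is a generic illustration of the mechanism (min for rule AND, clipping by rule strength, max to aggregate, centroid to defuzzify), not the chips' actual datapath, and the rule grades below are invented.

```python
# Sketch of Mamdani max-min inference. Fuzzy sets are lists of membership
# grades, analogous to the chips' 64-element arrays.

def infer(rules):
    """rules: list of (antecedent_grades, consequent_set) pairs, where
    antecedent_grades are the memberships of the crisp inputs in the
    rule's antecedent sets. Returns the aggregated output fuzzy set."""
    out = [0.0] * len(rules[0][1])
    for antecedents, consequent in rules:
        strength = min(antecedents)                   # AND via min
        for k, mu in enumerate(consequent):
            out[k] = max(out[k], min(strength, mu))   # clip, then max-combine
    return out

def defuzzify(fuzzy_set):
    """Centroid defuzzification, as performed on-chip by the UNC/MCNC design."""
    num = sum(i * mu for i, mu in enumerate(fuzzy_set))
    den = sum(fuzzy_set)
    return num / den if den else 0.0

rules = [([0.8, 0.6], [0.0, 1.0, 0.5]),
         ([0.3, 0.9], [0.5, 0.0, 1.0])]
aggregated = infer(rules)   # [0.3, 0.6, 0.5]
crisp = defuzzify(aggregated)
```

The inner loops are dominated by min and max operations, which is exactly why the proposed specialized min/max instructions speed up a software implementation.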


Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have made their AI technology public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, they strengthen their relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve it. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study thus attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, a hardware (GPU) environment for the AI R&D group must be provided to support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the stage of using the deep learning framework, companies will increase the number of deep learning research developers, their ability to use the framework, and the supply of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adoption of a deep learning framework is proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework throughout the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are clear, the next two steps can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

A Study on the Compatibility of Korean Temperature Guidelines for Stockpile Material Environmental Test (저장물자 환경시험을 위한 한국적 온도기준 적합성 연구)

  • Lee, Il Ro;Byun, Kisik;Cho, Sung-Yong;Kim, Kyung Pil;Park, Jae Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.8 / pp.187-194 / 2020
  • T&E (Test and Evaluation) results are applied as a basis for judging the development process in systems engineering for efficient weapon system R&D (Research and Development). During OT&E (Operational Test and Evaluation) and DT&E (Development Test and Evaluation), environmental testing is essential for weapon system development, owing to the highly exposed operational conditions of weapon systems. Based on MIL-STD-810, MIL-HDBK-310, and AECTP 200, the ROK armed forces recommended operating temperatures for ROK weapon systems and applied them in DT&E and OT&E. This study examined the compatibility of the Korean temperature guidelines for stockpile material in light of recent climate change. The study analyzed hourly temperature data measured at 101 observatories over 60 years, from 1960 to 2020, and computed the percentage (0.5%, 1%, 5%, and 10%) and 𝜎 (3𝜎, 2𝜎, and 1𝜎) frequencies of occurrence for the most rigorous hot (August) and cold (January) periods, respectively. The results indicate that the highest temperature was 41℃ and the 0.5% frequency of occurrence was 37.0℃. For the cold period, the lowest temperature was -32.6℃ and the 0.5% frequency of occurrence was -21.1℃. Considering the previously recommended operating temperature range for a general ground system, -30 ~ 40℃, the regional operation probability is recognized as 99.999%. Despite recent abnormal climate change driven by global warming, the Korean temperature guidelines are compatible with the stockpile material environmental test.
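The percentage frequency-of-occurrence analysis described above can be sketched as follows. The function is an illustrative assumption about the method (an empirical exceedance percentile over hourly readings), and the hourly values are toy data, not the study's 60-year record.

```python
# Illustrative sketch: find the temperature bounding the most extreme pct%
# of hourly readings, i.e. the value exceeded during only pct% of hot-season
# hours (or undercut during only pct% of cold-season hours).

def exceedance_threshold(temps, pct, hot=True):
    """Temperature bounding the most extreme pct% of hours
    (hottest if hot=True, coldest if hot=False)."""
    ranked = sorted(temps, reverse=hot)
    n = max(1, round(len(temps) * pct / 100.0))
    return ranked[n - 1]

august_hours = [30.0, 31.5, 33.0, 34.5, 36.0, 37.0, 38.5, 41.0]  # toy data
hot_threshold = exceedance_threshold(august_hours, 25.0, hot=True)    # 38.5
cold_threshold = exceedance_threshold(august_hours, 25.0, hot=False)  # 31.5
```

Applied to the full hourly record with pct = 0.5, this kind of computation would yield thresholds like the 37.0℃ (hot) and -21.1℃ (cold) values reported in the abstract.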