• Title/Summary/Keyword: Technology standard model


A Study on Applying Information Framework for BIM Based WBS -Focusing on Civil Construction- (BIM기반의 WBS 구축을 위한 정보프레임워크 도입방안 연구 -토목사업의 적용을 중심으로-)

  • Nam, Jeong-Yong;Jo, Chan-Won;Park, So-Hyun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.11
    • /
    • pp.770-777
    • /
    • 2017
  • Building information modeling (BIM) has been receiving attention since the 2000s as an integrated information model replacing CAD. BIM technology was first used in the architectural field and was later introduced to the civil engineering field. The government has announced a plan to apply BIM to 20% of all SOC projects from 2020, so the adoption of BIM technology is expected to accelerate. To adopt BIM successfully, a systematic structure supporting integrated design information and implementation technology is required. It is also important to establish the relationships between information systems, because many complicated factors are intertwined in the construction industry. In this study, we propose a framework for constructing integrated information by identifying the information relations needed to introduce BIM in the civil engineering industry. We applied this framework to a bridge project to confirm its effectiveness. This study can be applied to the integrated management of construction processes and costs through the introduction of a work breakdown structure (WBS) into BIM. In addition, this study is expected to contribute to the adoption of BIM in the civil engineering field through its proposal for standardizing the field's information systems.
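The integrated process-and-cost management that a WBS brings to BIM can be illustrated with a minimal sketch: a hierarchy of work packages whose leaf-level costs roll up through the tree. The node codes, names, and cost figures below are hypothetical illustrations, not the coding standard proposed in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class WBSNode:
    # One node of a work breakdown structure (codes and costs are illustrative).
    code: str
    name: str
    cost: float = 0.0
    children: list = field(default_factory=list)

    def total_cost(self):
        # Roll leaf-level costs up the hierarchy for integrated cost control.
        return self.cost + sum(c.total_cost() for c in self.children)

# A toy bridge-project breakdown (hypothetical structure and values).
bridge = WBSNode("1", "Bridge", children=[
    WBSNode("1.1", "Substructure", children=[
        WBSNode("1.1.1", "Pier foundation", cost=500.0),
        WBSNode("1.1.2", "Pier column", cost=300.0),
    ]),
    WBSNode("1.2", "Superstructure", children=[
        WBSNode("1.2.1", "Girder", cost=800.0),
    ]),
])
```

Linking BIM model elements to such codes is what allows schedule and cost data to be queried per work package rather than per drawing.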

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and proper information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than those previously offered. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information is collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and the product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training.
We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as optimized parameters for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity. The market size of the extracted products as one product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or require multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec.
Also, the product group clustering method could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they will further improve the performance of the basic model conceptually proposed in this study.
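The grouping-and-summation step of the pipeline above can be sketched as follows. This is a minimal illustration assuming toy three-dimensional vectors in place of trained 300-dimensional Word2Vec embeddings; the product names, vectors, and sales figures are hypothetical.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings standing in for trained Word2Vec vectors (hypothetical data).
embeddings = {
    "led lamp":   [0.9, 0.1, 0.0],
    "led bulb":   [0.8, 0.2, 0.1],
    "steel pipe": [0.0, 0.9, 0.4],
}
sales = {"led lamp": 120.0, "led bulb": 80.0, "steel pipe": 300.0}

def market_size(seed_vec, threshold=0.9):
    # Extract product names similar to the seed index word's vector,
    # then sum their sales to estimate the product group's market size.
    group = [p for p, v in embeddings.items() if cosine(seed_vec, v) >= threshold]
    return group, sum(sales[p] for p in group)

# Seed vector standing in for a KSIC index word's embedding.
group, size = market_size([1.0, 0.0, 0.0], threshold=0.9)
```

Raising or lowering `threshold` widens or narrows the product group, which is how the level of market category is adjusted in the described method.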

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching of nanoscale devices has been achieving an ultra-high aspect ratio contact (UHARC) profile without anomalous behaviors such as sidewall bowing and twisting. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still suffer from formidable challenges, such as tight limits on sidewall bowing and controlling randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies, including novel plasma chemistries and plasma sources, to overcome these process limitations. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for a silicon dioxide etching process under inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a quadrupole mass spectrometer (QMS). The surface chemistries of the etched samples were measured by X-ray photoelectron spectroscopy. To measure plasma parameters, a self-cleaned RF Langmuir probe was used to cope with polymer deposition on the probe tip, and the measurements were double-checked with the cutoff probe, which is known to be a precise diagnostic tool for electron density. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we proposed a phenomenological and realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer.
The predicted surface reaction modeling results showed good agreement with the experimental data. Building on these studies of plasma surface reactions, we developed a 3D topography simulator using a multi-layer level set algorithm and a new memory saving technique suitable for 3D UHARC etch simulation. Ballistic transports of neutral and ion species inside the feature profile were considered by deterministic and Monte Carlo methods, respectively. For ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required to treat these ballistic transports realistically. To address this issue, the related computational codes were efficiently parallelized for GPU (graphics processing unit) computing, so that the total computation time was improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model. Realistic etch-profile simulations that account for the sidewall polymer passivation layer were demonstrated.
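The shadowing effect that makes ion ballistic transport expensive to compute can be illustrated with a toy Monte Carlo sketch: ions launched with a Gaussian angular spread either reach the bottom of a contact hole on a straight line of sight or are intercepted by the sidewall. This is a simplified stand-in for the paper's 3D level-set simulator; the geometry, angular distribution, and parameter values are assumptions for illustration only.

```python
import math
import random

def bottom_flux_fraction(aspect_ratio, sigma_deg, n=20000, seed=1):
    # Monte Carlo estimate of the fraction of ions reaching the bottom of a
    # contact hole (depth/diameter = aspect_ratio) on a straight line of
    # sight, assuming a Gaussian angular spread about the surface normal.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        theta = abs(rng.gauss(0.0, math.radians(sigma_deg)))
        # Entry position across the hole opening (radius normalized to 0.5).
        x = rng.uniform(-0.5, 0.5)
        # Lateral displacement accumulated while traversing the hole depth.
        lateral = aspect_ratio * math.tan(theta)
        if abs(x) + lateral <= 0.5:  # still inside the hole at the bottom
            hits += 1
    return hits / n

# Deeper holes shadow more of the angular distribution, so less ion flux
# reaches the bottom -- the core difficulty of UHARC etch simulation.
shallow = bottom_flux_fraction(aspect_ratio=5, sigma_deg=3)
deep = bottom_flux_fraction(aspect_ratio=40, sigma_deg=3)
```

Even this toy model needs tens of thousands of samples per surface cell, which is why the full 3D simulation benefits so strongly from GPU parallelization.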


The Effects of Complex Motor Training on Motor Function and Synaptic Plasticity After Neonatal Binge-like Alcohol Exposure in Rats (복합운동훈련이 신생 흰쥐의 알코올성 소뇌손상 후 운동기능 및 신경연접가소성에 미치는 영향)

  • Lee, Sun-Min;Koo, Hyun-Mo;Kwon, Hyuk-Cheol
    • Physical Therapy Korea
    • /
    • v.12 no.3
    • /
    • pp.56-66
    • /
    • 2005
  • The purposes of this study were to test whether complex motor training significantly enhances motor function, to examine changes in the cerebellum, and to assess synaptic plasticity through the immunohistochemical response of synaptophysin. Using an animal model of fetal alcohol syndrome - which equates peak blood alcohol concentrations across the developmental period - the effects of alcohol on body weight were examined. The effects of complex motor training on the motor function and synaptic plasticity of rats exposed to alcohol on postnatal days 4 through 10 were studied. Newborn rats were assigned to one of two groups: (1) a normal group (NG), reared artificially on milk formula, and (2) alcohol groups (AG), given 4.5 g/kg/day of ethanol in a milk solution. After completion of the treatments, the pups were fostered back to lactating dams and raised in standard cages (two to three animals per cage) until postnatal day 48. Rats from the alcohol groups were then assigned to one of two groups: the alcohol-experimental group received complex motor training (learning to traverse a set of six elevated obstacles) for 4 weeks, while the alcohol-control group was not trained. Before the experiment, the rats were examined with four behavioral tests and their body weights were measured; their coronal sections were then processed with rabbit polyclonal antibody to synaptophysin. Synaptophysin expression in the cerebellar cortex was investigated using a light microscope. The results of this study were as follows: 1. The alcohol groups had significantly higher blood alcohol concentrations than the normal group. 2. The alcohol groups had significantly lower body weights than the normal group. 3. The alcohol groups performed significantly worse than the normal group on the motor behavioral tests. 4.
The alcohol-control group showed a significantly decreased immunohistochemical response of synaptophysin in the cerebellar cortex compared to the normal group. These results suggest that the improved motor function induced by complex motor training after postnatal alcohol exposure is associated with dynamically altered expression of synaptophysin in the cerebellar cortex, which is related to synaptic plasticity. These data can also potentially serve as a model for therapeutic intervention.


Joint Rate Control Scheme for Terrestrial Stereoscopic 3DTV Broadcast (스테레오스코픽 3차원 지상파 방송을 위한 합동 비트율 제어 연구)

  • Chang, Yongjun;Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2010.11a
    • /
    • pp.14-17
    • /
    • 2010
  • Following the proliferation of three-dimensional video content and displays, many terrestrial broadcasting companies are preparing to launch stereoscopic 3DTV services. In terrestrial stereoscopic broadcast, it is difficult to code and transmit two video sequences while sustaining quality as high as that of 2DTV broadcast, due to the limited bandwidth defined by existing digital TV standards such as ATSC. Thus, a terrestrial 3DTV broadcasting system with heterogeneous video codecs is considered, in which the left and right images are coded with MPEG-2 and H.264/AVC, respectively, to achieve both high-quality broadcasting service and compatibility for existing 2DTV viewers. We propose a joint rate control scheme for stereoscopic 3DTV service that requires no significant change to current terrestrial broadcasting systems. The proposed joint rate control scheme applies to the MPEG-2 encoder a quadratic rate-quantization model of the kind adopted in H.264/AVC. The controller is then designed so that the sum of the two bit streams meets the bandwidth requirement of the broadcasting standards, while the sum of the image distortions is minimized by adjusting the quantization parameters computed from the proposed optimization scheme. In addition, we consider a constraint on the quality difference between the left and right images in the optimization. Experimental results demonstrate that the proposed bit rate control scheme outperforms rate control in which each video coding standard uses its own algorithm, in terms of minimizing the mean image distortion as well as the mean value and variation of the absolute image quality differences.
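The quadratic rate-quantization model mentioned above predicts the bit rate as R(Q) = a/Q + b/Q²; inverting it gives the quantization step that meets a bit budget. The sketch below shows only this single-stream inversion with illustrative parameter values; the joint allocation across the MPEG-2 and H.264/AVC streams described in the paper is omitted, and in practice a and b are updated from past frame statistics.

```python
import math

def rate(q, a, b):
    # Bits predicted by the quadratic model at quantization step q.
    return a / q + b / (q * q)

def target_q(budget, a, b):
    # Solve R = a/Q + b/Q^2 for Q, i.e. the positive root of
    # R*Q^2 - a*Q - b = 0 (quadratic formula).
    return (a + math.sqrt(a * a + 4.0 * budget * b)) / (2.0 * budget)

# Illustrative model parameters and a 1000-bit frame budget.
q = target_q(budget=1000.0, a=12000.0, b=300000.0)
```

A joint controller would apply this inversion to both encoders and split the total bandwidth so the summed distortion, subject to the left-right quality constraint, is minimized.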


Three-dimensional Texture Coordinate Coding Using Texture Image Rearrangement (텍스처 영상 재배열을 이용한 삼차원 텍스처 좌표 부호화)

  • Kim, Sung-Yeol;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.6 s.312
    • /
    • pp.36-45
    • /
    • 2006
  • Three-dimensional (3-D) texture coordinates are the position information of texture segments that are mapped onto polygons in a 3-D mesh model. To compress texture coordinates, previous works reused the same linear predictor that had already been employed to code geometry data. However, these approaches could not carry out linear prediction efficiently, since texture coordinates are discontinuous along the coding order. Discontinuities of texture coordinates become especially serious in 3-D mesh models with a non-atlas texture. In this paper, we propose a new scheme to code 3-D texture coordinates using texture image rearrangement. The proposed coding scheme first extracts texture segments from the texture image. We then rearrange the texture segments consecutively along the coding order and apply linear prediction to compress the texture coordinates. Since the proposed scheme minimizes discontinuities of texture coordinates, it improves their coding efficiency. Experimental results show that the proposed scheme outperforms the MPEG-4 3DMC standard in terms of coding efficiency.
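The role of linear prediction, and why contiguity along the coding order matters, can be sketched with a simple delta predictor: only residuals between consecutive coordinates are encoded, so rearranging segments to be contiguous shrinks the residuals. This is a simplified stand-in for the actual predictor reused from geometry coding; the coordinate values below are hypothetical.

```python
def predict_residuals(coords):
    # Linear (delta) prediction of 2-D texture coordinates along the coding
    # order: each coordinate is predicted from the previous one and only
    # the residual is encoded.
    residuals, prev = [], (0.0, 0.0)
    for u, v in coords:
        residuals.append((u - prev[0], v - prev[1]))
        prev = (u, v)
    return residuals

def reconstruct(residuals):
    # Decoder side: accumulate residuals to recover the coordinates.
    coords, prev = [], (0.0, 0.0)
    for du, dv in residuals:
        prev = (prev[0] + du, prev[1] + dv)
        coords.append(prev)
    return coords

def total_residual_cost(coords):
    # Sum of absolute residual magnitudes: a rough proxy for coding cost.
    return sum(abs(du) + abs(dv) for du, dv in predict_residuals(coords))

# Discontinuous coding order (segments scattered across the texture) versus
# a rearranged, contiguous order: the latter leaves smaller residuals.
scattered = [(0.1, 0.1), (0.9, 0.8), (0.15, 0.12), (0.95, 0.85)]
contiguous = sorted(scattered)
```

Rearranging the texture image so that segments visited consecutively are also adjacent in the image is what makes the residuals, and hence the coded bitstream, small.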

Development of managerial decision-making support technology model for supporting knowledge intensive consulting process (지식집약형 컨설팅프로세스 지원을 위한 경영의사결정지원 기술모델 개발연구)

  • Kim, Yong Jin;Jin, Seung Hye
    • Journal of Digital Convergence
    • /
    • v.11 no.4
    • /
    • pp.251-258
    • /
    • 2013
  • Recently, companies have been confronted with a much more sophisticated business environment than before and, at the same time, must adapt to rapid changes. Accordingly, the need to select among alternatives and manage systematic decision-making has been steadily increasing in order to respond to more diverse customer needs and keep up with fierce competition. In this study, we propose a framework that consists of problem-solving procedures and techniques and a knowledge structure built on processes to support strategic decision making, and we discuss how to utilize simulation tools as knowledge-based problem-solving tools. In addition, we discuss how to build and advance the knowledge structure to implement the proposed architecture. The management decision support systems architecture consists of three key components. The first is the problem-solving approach used as a reference. The second is the knowledge structure on business processes, which includes standard and reference business processes. The third is the set of simulators that generate and analyze alternatives using problem-solving techniques and the knowledge base. In sum, the proposed framework of decision-making support systems facilitates knowledge-intensive consulting processes, promoting the development and application of consulting knowledge and techniques and increasing the efficiency of consulting firms and the industry.

Model Verification of a Safe Security Authentication Protocol Applicable to RFID System (RFID 시스템에 적용시 안전한 보안인증 프로토콜의 모델검증)

  • Bae, WooSik;Jung, SukYong;Han, KunHee
    • Journal of Digital Convergence
    • /
    • v.11 no.4
    • /
    • pp.221-227
    • /
    • 2013
  • RFID is an automatic identification technology that can carry a range of information via IC chips and radio communication. Also known as electronic tags, smart tags or electronic labels, RFID technology enables embedding information on the overall process from production to sales in an ultra-small IC chip and tracking such information using radio frequencies. Currently, RFID-based application development is in progress in fields such as health care, national defense, logistics and security. An RFID structure consists of a reader that reads tag information, a tag that provides information, and a database that manages the data. However, the wireless section between the reader and the tag is vulnerable to security attacks. To address this vulnerability, studies on security protocols have been conducted actively. Due to difficulties in implementation, however, most proposals rely on theorem proving alone, which is prone to vulnerabilities discovered later by other investigators and therefore often runs into problems of practical applicability. To experimentally test the security of the protocol proposed here, the formal verification tool CasperFDR was used. In summary, the proposed protocol was found to be secure against diverse attacks: it meets the safety standard against new types of attacks and ensures security when applied to real tags in the future.
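For intuition about the kind of reader-tag exchange such protocols secure, here is a generic hashed challenge-response sketch. It is illustrative only and not the specific protocol verified in the paper; the shared-key setup and message format are assumptions.

```python
import hashlib
import secrets

def h(*parts):
    # Hash used by the toy protocol: SHA-256 over the concatenated parts.
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

# Reader and tag share a secret key; the tag never transmits the key or its
# identity in the clear, and fresh random values prevent replay attacks.
def reader_challenge():
    return secrets.token_bytes(16)

def tag_response(key, challenge):
    nonce = secrets.token_bytes(16)
    return nonce, h(key, challenge, nonce)

def reader_verify(key, challenge, nonce, digest):
    return digest == h(key, challenge, nonce)
```

Formal tools such as CasperFDR check properties like secrecy and authentication of exchanges of this shape by model-checking them against an active attacker, rather than relying on manual proofs.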

Creation of Consistent Burn Wounds: A Rat Model

  • Cai, Elijah Zhengyang;Ang, Chuan Han;Raju, Ashvin;Tan, Kong Bing;Hing, Eileen Chor Hoong;Loo, Yihua;Wong, Yong Chiat;Lee, Hanjing;Lim, Jane;Moochhala, Shabbir M.;Hauser, Charlotte A.E.;Lim, Thiam Chye
    • Archives of Plastic Surgery
    • /
    • v.41 no.4
    • /
    • pp.317-324
    • /
    • 2014
  • Background Burn infliction techniques are poorly described in rat models. An accurate study can only be achieved with wounds that are uniform in size and depth. We describe a simple reproducible method for creating consistent burn wounds in rats. Methods Ten male Sprague-Dawley rats were anesthetized and their dorsa shaved. A 100 g cylindrical stainless-steel rod (1 cm diameter) was heated to 100°C in boiling water. Temperature was monitored using a thermocouple. We performed two consecutive toe-pinch tests on different limbs to assess the depth of sedation. Burn infliction was limited to the loin. The skin was pulled upwards, away from the underlying viscera, creating a flat surface. The rod rested on its own weight for 5, 10, and 20 seconds at three different sites on each rat. Wounds were evaluated for size, morphology and depth. Results Average wound size was 0.9957 cm² (standard deviation [SD] 0.1845) (n=30). Wounds created with a duration of 5 seconds were pale, with an indistinct margin of erythema. Wounds of 10 and 20 seconds were well-defined, uniformly brown with a rim of erythema. Average depths of tissue damage were 1.30 mm (SD 0.424), 2.35 mm (SD 0.071), and 2.60 mm (SD 0.283) for durations of 5, 10, and 20 seconds, respectively. A burn duration of 5 seconds resulted in full-thickness damage. Burn durations of 10 and 20 seconds resulted in full-thickness damage involving subjacent skeletal muscle. Conclusions This is a simple reproducible method for creating burn wounds consistent in size and depth in a rat burn model.

Economic Impact Analysis of Hydrogen Energy Deployment Applying Dynamic CGE Model (동태 CGE 모형을 활용한 수소에너지 보급의 경제적 영향 추정)

  • Bae, Jeong-Hwan;Cho, Gyeong-Lyeob
    • Environmental and Resource Economics Review
    • /
    • v.16 no.2
    • /
    • pp.275-311
    • /
    • 2007
  • Hydrogen energy is emphasized as a future substitute for the carbon-based energy system, since it is a non-depletable and clean energy source. The Korean government's long-term vision for the national energy system is to raise hydrogen energy to 15% of final energy demand by 2040. This study analyzes the economic impacts of hydrogen energy development employing a dynamic CGE model for Korea. A frontier technology such as hydrogen energy is characterized by slow diffusion at the initial stage due to the learning effect and energy complementarity. Without government intervention, hydrogen energy would reach only up to 6.5% of final energy demand by 2040. However, if the government subsidizes the sales price of hydrogen energy by 10%, 20%, or 30%, the share of hydrogen energy would increase to 9.2%, 15.2%, or 37.7% of final energy demand, respectively. This result shows that the slow diffusion of hydrogen energy as a frontier technology could be overcome by market incentive policies. On the other hand, production in the transportation sector would increase, while the growth rates of the oil and electricity sectors would decline. Household consumption would be affected negatively, since the increase in consumption due to the price decrease would be overwhelmed by the income reduction caused by the tax increase. Overall, GDP would not change significantly, since total production, investment, and exports would increase even as household consumption declines.
