• Title/Summary/Keyword: Parallel Simulation


The Influential Factor Analysis in the Technology Valuation of The Agri-Food Industry and the Simulation-Based Valuation Analysis (농식품 산업의 기술평가 영향요인 분석과 시뮬레이션 기반 기술평가 비교)

  • Kim, Sang-gook;Jun, Seung-pyo;Park, Hyun-woo
    • Journal of Technology Innovation
    • /
    • v.24 no.4
    • /
    • pp.277-307
    • /
    • 2016
  • Since 2011, the DCF (Discounted Cash Flow) method has been the primary method for valuating R&D technology assets in the agri-food industry, and recently technology valuation based on royalty comparisons among technology transfer transactions has also been carried out in parallel when evaluating assets such as new seed development technologies. Because the DCF method involves many input variables that must be estimated, sophisticated estimation is required at the time of valuation. In addition, considering a larger number of similar trading cases when applying the sales transaction comparison or industry norm method based on technology transfer royalty information is an issue that must likewise be addressed in the agri-food industry. The main input variables used for technology valuation in this industry are the life cycle of the technology asset, financial information related to the agri-food industry, the discount rate, and the technology contribution rate. The evaluation organization of the agri-food segment has recently been building the supporting infrastructure and updating the reference data for technology valuation on a regular basis. This study identifies the key variables that have the greatest impact on the results of existing technology valuations in the agri-food industry and clarifies the difference between the existing valuation results and the outcomes obtained when the latest input information is applied in the DCF method. In addition, while presenting a scheme to complement the fragmentary information that the latest input data alone contribute to the valuation result, we perform a comparative analysis between the existing valuation results and the outcomes obtained after updating the reference data used to determine the DCF input values. To carry out these analyses, representative cases previously evaluated in the agri-food industry were first selected, a sensitivity analysis of the input variables was applied to these cases, and a simulation analysis was then executed using the key input variables derived from the sensitivity analysis. The results indicate that the data behind the input variables used in valuating technology assets in the agri-food sector need to be kept up to date and that an infrastructure for the key DCF input variables needs to be built, which is expected to yield more informative valuation results.
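
A minimal sketch of the kind of DCF-plus-simulation analysis described above, written in Python. The cash-flow series, the triangular ranges for the discount rate and technology contribution rate, and the life-cycle range are hypothetical placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def dcf_value(cash_flows, discount_rate, tech_contribution):
    """Present value of the cash flows attributable to the technology."""
    years = np.arange(1, len(cash_flows) + 1)
    return np.sum(cash_flows / (1.0 + discount_rate) ** years) * tech_contribution

# Hypothetical base-case free cash flows (million KRW per year).
base_cash_flows = np.array([120.0, 150.0, 170.0, 160.0, 140.0])

# Monte Carlo simulation over the key DCF inputs named in the abstract:
# technology life cycle, discount rate, and technology contribution rate.
n_trials = 10_000
life_cycle = rng.integers(3, 6, n_trials)                   # years of remaining life
discount_rate = rng.triangular(0.08, 0.12, 0.18, n_trials)  # assumed range
tech_contribution = rng.triangular(0.20, 0.30, 0.45, n_trials)

values = np.array([
    dcf_value(base_cash_flows[:n], r, c)
    for n, r, c in zip(life_cycle, discount_rate, tech_contribution)
])

print(f"median value: {np.median(values):.1f}, "
      f"90% interval: [{np.percentile(values, 5):.1f}, {np.percentile(values, 95):.1f}]")
```

A sensitivity analysis in the same setting amounts to recomputing dcf_value while varying one input at a time over its range and ranking the inputs by the spread they induce in the result.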

Empirical Study of Simple Grade Facilities Gap Utilizing Micro Simulation Analysis (Micro Simulation을 활용한 도시부 단순입체시설 분합류 구간간격에 관한 실증연구)

  • Kim, Young-Il;Rho, Jeong-Hyun;Kim, Tae-Ho;Park, Jun-Tae
    • International Journal of Highway Engineering
    • /
    • v.14 no.2
    • /
    • pp.63-72
    • /
    • 2012
  • Current analysis methods exclude the main traffic flow from the analysis, which leads to irrational road and signal operation and causes problems such as weaving and bottlenecks. This research therefore develops an analysis method for simple grade-separated facilities in urban areas in order to grasp their effect on the surrounding network and to determine the optimal spacing between such facilities and adjacent intersections. The new method reasonably estimated the optimal spacing for the diverge and merge areas of the traffic flow. As a result, the analysis method presented in this research clarifies ambiguous parts of the existing analysis methods and produces reasonable estimates. The optimal spacing of the diverge and merge areas was found to be more than 65 m from the main line and more than 45 m from the frontage road. The contributions of this paper are as follows. First, the effect of simple grade-separated facilities can be estimated; considering their optimal spacing in urban areas allows road and signal operation to be planned efficiently in connection with nearby intersections. Second, the new method can be run in parallel with existing methods, so transportation planners can adopt it easily. Third, the new method accounts for through-traffic volumes, which the existing methods did not reflect, and thereby minimizes errors in intersection and link analysis. Fourth, the new method can be used to propose improvement plans for road operation and signal operation.

Contrast Media in Abdominal Computed Tomography: Optimization of Delivery Methods

  • Joon Koo Han;Byung Ihn Choi;Ah Young Kim;Soo Jung Kim
    • Korean Journal of Radiology
    • /
    • v.2 no.1
    • /
    • pp.28-36
    • /
    • 2001
  • Objective: To provide a systematic overview of the effects of various parameters on contrast enhancement within the same population, an animal experiment as well as a computer-aided simulation study was performed. Materials and Methods: In an animal experiment, single-level dynamic CT through the liver was performed at 5-second intervals just after the injection of contrast medium for 3 minutes. Combinations of three different amounts (1, 2, 3 mL/kg), concentrations (150, 200, 300 mgI/mL), and injection rates (0.5, 1, 2 mL/sec) were used. The CT number of the aorta (A), portal vein (P) and liver (L) was measured in each image, and time-attenuation curves for A, P and L were thus obtained. The degree of maximum enhancement (Imax) and time to reach peak enhancement (Tmax) of A, P and L were determined, and times to equilibrium (Teq) were analyzed. In the computer-aided simulation model, a program based on the amount, flow, and diffusion coefficient of body fluid in various compartments of the human body was designed. The input variables were the concentrations, volumes and injection rates of the contrast media used. The program generated the time-attenuation curves of A, P and L, as well as liver-to-hepatocellular carcinoma (HCC) contrast curves. On each curve, we calculated and plotted the optimal temporal window (time period above the lower threshold, which in this experiment was 10 Hounsfield units), the total area under the curve above the lower threshold, and the area within the optimal range. Results: A. Animal Experiment: At a given concentration and injection rate, an increased volume of contrast medium led to increases in Imax A, P and L. In addition, Tmax A, P, L and Teq were prolonged in parallel with increases in injection time. The time-attenuation curve shifted upward and to the right. For a given volume and injection rate, an increased concentration of contrast medium increased the degree of aortic, portal and hepatic enhancement, though Tmax A, P and L remained the same. The time-attenuation curve shifted upward. For a given volume and concentration of contrast medium, changes in the injection rate had a prominent effect on aortic enhancement, and that of the portal vein and hepatic parenchyma also showed some increase, though the effect was less prominent. An increase in the rate of contrast injection led to shifting of the time-enhancement curve to the left and upward. B. Computer Simulation: At a faster injection rate, there was minimal change in the degree of hepatic attenuation, though the duration of the optimal temporal window decreased. The area between 10 and 30 HU was greatest when contrast medium was delivered at a rate of 2-3 mL/sec. Although the total area under the curve increased in proportion to the injection rate, most of this increase was above the upper threshold, so the temporal window was narrow and the optimal area decreased. Conclusion: Increases in volume, concentration and injection rate all resulted in improved arterial enhancement. If cost is disregarded, increasing the injection volume is the most reliable way of obtaining good-quality enhancement. The optimal way of delivering a given amount of contrast medium can be calculated using a computer-based mathematical model.
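
The "optimal temporal window" and threshold-based areas described above can be illustrated with a small Python sketch; the time-attenuation curve below is synthetic, while the 10 HU lower threshold and the 10-30 HU band are taken from the abstract:

```python
import numpy as np

# Synthetic liver-to-HCC contrast curve sampled every 5 s (illustrative only).
dt = 5.0                                              # sampling interval, seconds
t = np.arange(0.0, 180.0, dt)
contrast = 35.0 * np.exp(-((t - 60.0) / 45.0) ** 2)   # peaks near 35 HU around 60 s

LOWER_HU, UPPER_HU = 10.0, 30.0

# Optimal temporal window: total time the curve spends above the lower threshold.
optimal_window_s = np.count_nonzero(contrast >= LOWER_HU) * dt

# Total area under the curve above 10 HU, and the portion lying within 10-30 HU.
area_above_lower = np.sum(np.clip(contrast - LOWER_HU, 0.0, None)) * dt
area_optimal = np.sum(np.clip(contrast, LOWER_HU, UPPER_HU) - LOWER_HU) * dt

print(f"optimal temporal window: {optimal_window_s:.0f} s")
print(f"area above 10 HU: {area_above_lower:.0f} HU*s "
      f"(within 10-30 HU: {area_optimal:.0f} HU*s)")
```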


Treatment Planning for Minimizing Carotid Artery Dose in the Radiotherapy of Early Glottic Cancer (조기 성문암의 방사선치료에서 경동맥을 보호하기 위한 치료 계획)

  • Ki, Yang-Kan;Kim, Won-Taek;Nam, Ji-Ho;Kim, Dong-Hyun;Lee, Ju-Hye;Park, Dal;Kim, Don-Won
    • Radiation Oncology Journal
    • /
    • v.29 no.2
    • /
    • pp.115-120
    • /
    • 2011
  • Purpose: To examine the feasibility of treatment planning for minimizing the carotid artery dose in the radiotherapy of early glottic cancer. Materials and Methods: From 2007 to 2010, the computed tomography simulation images of 31 patients treated by radiotherapy for early glottic cancer were analyzed. Virtual planning was used to compare parallel-opposed fields (POF) with modified oblique fields (MOF) placed at angles chosen to exclude the ipsilateral carotid arteries. The planning target volume (PTV), irradiated volume, carotid artery, and spinal cord were analyzed in terms of mean dose, $V_{35}$, $V_{40}$, $V_{50}$, and percent dose-volume. Results: The beam angles were arranged 25 degrees anteriorly in 23 patients and 30 degrees anteriorly in 8 patients. The percent dose-volume of the carotid artery showed a significant difference between the two techniques (p<0.001). The mean carotid artery doses were 38.5 Gy for POF and 26.3 Gy for MOF, and the difference was statistically significant (p=0.012). Similarly, $V_{35}$, $V_{40}$, and $V_{50}$ also showed significant differences between POF and MOF. Conclusion: Based on these results, the modified oblique fields are expected to help prevent carotid artery stenosis and reduce the incidence of stroke.
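
The $V_{35}$, $V_{40}$, and $V_{50}$ metrics compared above are the fraction of a structure's volume receiving at least 35, 40, or 50 Gy. A minimal Python sketch of how such dose-volume metrics are computed from per-voxel doses is shown below; the dose samples are synthetic (drawn around the mean doses reported in the abstract), not actual plan data:

```python
import numpy as np

def v_x(dose_gy, threshold_gy):
    """Percent of structure volume receiving at least threshold_gy (equal voxel volumes assumed)."""
    dose_gy = np.asarray(dose_gy)
    return 100.0 * np.count_nonzero(dose_gy >= threshold_gy) / dose_gy.size

rng = np.random.default_rng(1)
dose_pof = np.clip(rng.normal(38.5, 8.0, 5000), 0.0, None)  # parallel-opposed fields
dose_mof = np.clip(rng.normal(26.3, 8.0, 5000), 0.0, None)  # modified oblique fields

for name, dose in [("POF", dose_pof), ("MOF", dose_mof)]:
    metrics = {f"V{v}": round(v_x(dose, v), 1) for v in (35, 40, 50)}
    print(name, metrics, "mean:", round(float(dose.mean()), 1), "Gy")
```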

An Efficient Array Algorithm for VLSI Implementation of Vector-radix 2-D Fast Discrete Cosine Transform (Vector-radix 2차원 고속 DCT의 VLSI 구현을 위한 효율적인 어레이 알고리듬)

  • 신경욱;전흥우;강용섬
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.12
    • /
    • pp.1970-1982
    • /
    • 1993
  • This paper describes an efficient array algorithm for the parallel computation of the vector-radix two-dimensional (2-D) fast discrete cosine transform (VR-FCT), and its VLSI implementation. By mapping the 2-D VR-FCT onto a 2-D array of processing elements (PEs), the butterfly structure of the VR-FCT can be implemented efficiently with high concurrency and a local communication geometry. The proposed array algorithm features architectural modularity, regularity and locality, so it is very suitable for VLSI realization. Also, no transposition memory is required, which is inevitable in the conventional row-column decomposition approach. It has a time complexity of $O(N+N_{nzd}{\cdot}\log_2N)$ for the $(N{\times}N)$ 2-D DCT, where $N_{nzd}$ is the number of non-zero digits in the canonic signed-digit (CSD) code. By adopting CSD arithmetic in the circuit design, the number of additions is reduced by about 30% compared to 2's-complement arithmetic. A computational accuracy analysis for finite-wordlength processing is presented. From the simulation results, it is estimated that the $(8{\times}8)$ 2-D DCT (with $N_{nzd}=4$) can be computed in about $0.88{\mu}s$ at a 50 MHz clock frequency, resulting in a throughput rate of about 72 megapixels per second.
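
The complexity above depends on the number of non-zero digits in the canonic signed-digit (CSD) code of each coefficient. The paper's own recoding and array mapping are not given in the abstract, so the following is only a generic CSD encoder sketch in Python, showing why CSD multiplication needs fewer add/subtract operations than plain binary:

```python
def csd_encode(n: int) -> list[int]:
    """Canonic signed-digit recoding of a positive integer: digits in {-1, 0, +1},
    least-significant first, with no two adjacent non-zero digits."""
    digits = []
    while n != 0:
        if n % 2:                  # odd: emit +1 or -1 so the next bit becomes zero
            d = 2 - (n % 4)        # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# Example: a 9-bit fixed-point coefficient value chosen purely for illustration.
coeff = 0b101101101                # 365
csd = csd_encode(coeff)
nonzero_csd = sum(1 for d in csd if d)
nonzero_bin = bin(coeff).count("1")
print(csd)
print(f"adds/subtracts with CSD: {nonzero_csd}, adds with plain binary: {nonzero_bin}")
```

Each non-zero CSD digit corresponds to one add or subtract in a multiplierless datapath, which is the source of the roughly 30% reduction in additions reported above.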


Cache Performance Analysis of Multiprocessor Systems for OLTP Applications based on a Memory-Resident DBMS (메모리 상주 DBMS 기반의 OLTP 응용을 위한 다중프로세서 시스템 캐쉬 성능 분석)

  • Chung, Yong-Wha;Hahn, Woo-Jong;Yoon, Suk-Han;Park, Jin-Won;Lee, Kang-Woo;Kim, Yang-Woo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.6 no.4
    • /
    • pp.383-392
    • /
    • 2000
  • Currently, multiprocessors are evaluated almost exclusively with scientific applications. Commercial applications are rarely explored because it is difficult to obtain the source code of a commercial DBMS. Even when the source code is available, as for POSTGRES, understanding it well enough to perform detailed, meaningful performance evaluations is a daunting task for computer architects. To evaluate multiprocessors with commercial applications, we have developed our own DBMS, called EZDB. EZDB is a parallelized DBMS, loosely inspired by POSTGRES, running on top of a software architecture simulator. It is capable of executing parallel programs written in SQL. Contrary to POSTGRES, EZDB is not intended as a prototype for a production-quality DBMS; its purpose is to make it easy to run commercial applications on multiprocessor architectures and evaluate their performance. To illustrate the usefulness of EZDB, we show the cache performance data collected for the TPC-B benchmark on a shared-memory multiprocessor. The simulation results showed that the data structures exhibited unique sharing characteristics and that their locality properties and working sets were very different from those in scientific applications.
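
EZDB and its architecture simulator are not publicly described in the abstract, so the following is only a toy illustration of the kind of trace-driven cache analysis such a study relies on: a direct-mapped cache model fed with a synthetic address trace. The trace, cache geometry, and "hot shared structure" address are assumptions for illustration:

```python
def simulate_direct_mapped_cache(trace, line_size=64, n_lines=1024):
    """Count hits and misses of a direct-mapped cache for a sequence of byte addresses."""
    tags = [None] * n_lines
    hits = misses = 0
    for addr in trace:
        block = addr // line_size
        index = block % n_lines
        tag = block // n_lines
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag
    return hits, misses

# Synthetic trace: a sequential scan of a 1 MB relation plus repeated touches of one
# hot shared structure (loosely OLTP-like; not an actual TPC-B trace).
trace = []
for addr in range(0, 1 << 20, 8):        # sequential 8-byte reads
    trace.append(addr)
    if addr % 4096 == 0:
        trace.append(0x4000_0000)        # hot shared address touched repeatedly

hits, misses = simulate_direct_mapped_cache(trace)
print(f"hit ratio: {hits / (hits + misses):.3f}")
```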


The Arch Type PV System Performance Evaluation of Multi Controlled Inverter for Improve the Efficiency (효율개선을 위한 다중제어 인버터방식의 아치형 PV System 성능 분석)

  • Lee, Mi-Yong;Park, Jeong-Min
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.11
    • /
    • pp.5452-5457
    • /
    • 2012
  • BIPV (building-integrated photovoltaics) saves material and construction costs by replacing conventional building materials and also offers aesthetic advantages. In Europe, the United States, Japan and other countries, research on BIPV is being carried out actively and its market is expanding rapidly. The efficiency characteristics of an arch-type PV system differ depending on the series and parallel connection of the PV array and on the arch angle, but analysis of this has so far been lacking. When arch-type PV systems are designed, aesthetic value is considered but generation efficiency is not. In this paper, we try to improve efficiency through optimization of the arch-type PV system and estimation of its efficiency parameters, such as latitude, longitude, temperature, insolation, arch angle, and the various losses arising from the system configuration. To improve the efficiency of the arched PV system, a multiple-control inverter system is proposed, and using the arch-type PV simulation tool "Solar pro", the driving characteristics of a flat-plate configuration and several arch-type PV system configurations were compared and analyzed.
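
The abstract does not give its efficiency model, so the following is only a crude two-dimensional geometric sketch of why per-segment (multi-controlled) inverters can outperform a single string inverter on an arch: panels at different arch angles receive different irradiance, and a single series string is roughly limited by its worst-illuminated segment (bypass diodes and detailed electrical models are ignored). All numbers and the mismatch rule are illustrative assumptions:

```python
import numpy as np

def plane_of_array_factor(panel_tilt_deg, sun_elevation_deg=60.0):
    """Simplified irradiance factor: cosine of the angle between the sun and the
    panel normal in a 2-D arch cross-section (0 when the panel faces away)."""
    incidence_deg = abs((90.0 - sun_elevation_deg) - panel_tilt_deg)
    return max(np.cos(np.radians(incidence_deg)), 0.0)

tilts = np.linspace(-45.0, 45.0, 9)     # panel tilts along the arch (assumed geometry)
factors = np.array([plane_of_array_factor(t) for t in tilts])
p_stc = 250.0                           # W per panel at standard conditions (assumed)

# Multi-controlled inverters: every segment tracks its own maximum power point.
p_multi = float(np.sum(factors * p_stc))
# Single central inverter on one series string: crudely limited by the worst segment.
p_single = len(tilts) * float(np.min(factors)) * p_stc

print(f"multi-inverter output: {p_multi:.0f} W, single string inverter: {p_single:.0f} W")
```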

Implementation of WLAN Baseband Processor Based on Space-Frequency OFDM Transmit Diversity Scheme (공간-주파수 OFDM 전송 다이버시티 기법 기반 무선 LAN 기저대역 프로세서의 구현)

  • Jung Yunho;Noh Seungpyo;Yoon Hongil;Kim Jaeseok
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.42 no.5 s.335
    • /
    • pp.55-62
    • /
    • 2005
  • In this paper, we propose an efficient symbol detection algorithm for the space-frequency OFDM (SF-OFDM) transmit diversity scheme and present the implementation results of an SF-OFDM WLAN baseband processor using the proposed algorithm. When the number of sub-carriers in the SF-OFDM scheme is small, interference between adjacent sub-carriers may be generated. The proposed algorithm eliminates this interference in a parallel manner and obtains a considerable performance improvement over the conventional detection algorithm. The bit error rate (BER) performance of the proposed detection algorithm is evaluated by simulation. In the case of 2 transmit and 2 receive antennas, at $BER=10^{-4}$ the proposed algorithm obtains a gain of about 3 dB over the conventional detection algorithm. The packet error rate (PER), link throughput, and coverage performance of the SF-OFDM WLAN with the proposed detection algorithm are also estimated. For a target throughput of $80\%$ of the peak data rate, the SF-OFDM WLAN achieves an average SNR gain of about 5.95 dB and an average coverage gain of 3.98 meters. The SF-OFDM WLAN baseband processor with the proposed algorithm was designed in a hardware description language and synthesized to gate-level circuits using a 0.18 um 1.8 V CMOS standard cell library. With the division-free architecture, the total logic gate count for the processor is 945K. Real-time operation is verified and evaluated using an FPGA test system.
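
The proposed parallel interference-cancelling detector itself is not specified in the abstract; as background, the sketch below shows only the conventional Alamouti-style space-frequency combining over a pair of adjacent sub-carriers (simplified to 2 transmit and 1 receive antenna, with the channel assumed identical on both sub-carriers of the pair). The QPSK constellation, noise level and channel model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def sf_alamouti_pair(s1, s2, h1, h2, noise_std=0.1):
    """Transmit one Alamouti space-frequency pair on two adjacent sub-carriers
    (2 TX antennas, 1 RX antenna) and recover the symbols by linear combining."""
    noise = noise_std * (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
    r_k  = h1 * s1 + h2 * s2 + noise[0]                        # sub-carrier k
    r_k1 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]     # sub-carrier k+1
    gain = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r_k + h2 * np.conj(r_k1)) / gain
    s2_hat = (np.conj(h2) * r_k - h1 * np.conj(r_k1)) / gain
    return s1_hat, s2_hat

# QPSK symbols and a random flat channel for one sub-carrier pair (illustrative only).
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = rng.choice(qpsk, 2)
h1, h2 = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
print("detected:", sf_alamouti_pair(s1, s2, h1, h2), "sent:", (s1, s2))
```

When the channel differs between the two sub-carriers of a pair, the combining above leaves residual inter-sub-carrier interference, which is essentially the effect the paper's detector is designed to remove.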

A Dynamical Load Balancing Method for Data Streaming and User Request in WebRTC Environment (WebRTC 환경에 데이터 스트리밍 및 사용자 요청에 따른 동적로드 밸런싱 방법)

  • Ma, Linh Van;Park, Sanghyun;Jang, Jong-hyun;Park, Jaehyung;Kim, Jinsul
    • Journal of Digital Contents Society
    • /
    • v.17 no.6
    • /
    • pp.581-592
    • /
    • 2016
  • WebRTC has quickly grown into an advanced real-time communication technology used across several platforms, such as the web and mobile. Despite this advantage, current WebRTC technology does not efficiently handle large streams between peers or a large volume of user requests at the signaling server. In this paper, we therefore address this problem by routing the data flows with dynamic load-balancing algorithms. We analyze the requesting users and direct their streaming requests to a load-balancing component. More specifically, the component determines the amount of resources requested and the resources available on the responding servers, and then delivers the streaming data to the requesting user in parallel or alternately. To show how the method works, we first demonstrate the load-balancing algorithm using the network simulation tool OPNET and then implement the method on an Ubuntu server. In addition, we compare our results with the original WebRTC implementation and show that the proposed method performs more efficiently and dynamically than the original.
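
The paper's own algorithm is only outlined above, so the following is a toy Python sketch of the decision the load-balancing component makes: send a stream to the server with the most available capacity, and if no single server can carry it, deliver it in parallel by splitting it across servers in proportion to their available capacity. The class and server names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class StreamServer:
    name: str
    capacity_mbps: float
    load_mbps: float = 0.0

    @property
    def available(self) -> float:
        return self.capacity_mbps - self.load_mbps

class DynamicLoadBalancer:
    """Route each stream to the least-loaded server, or split it across servers."""

    def __init__(self, servers):
        self.servers = servers

    def assign(self, requested_mbps: float):
        best = max(self.servers, key=lambda s: s.available)
        if best.available >= requested_mbps:
            best.load_mbps += requested_mbps
            return [(best.name, requested_mbps)]
        # Parallel delivery: split in proportion to each server's available capacity.
        total_available = sum(max(s.available, 0.0) for s in self.servers)
        shares = []
        for s in self.servers:
            share = requested_mbps * max(s.available, 0.0) / total_available if total_available else 0.0
            s.load_mbps += share
            shares.append((s.name, round(share, 1)))
        return shares

lb = DynamicLoadBalancer([StreamServer("edge-1", 100.0), StreamServer("edge-2", 80.0)])
print(lb.assign(60.0))   # fits on a single server
print(lb.assign(90.0))   # exceeds any single server, so it is delivered in parallel
```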

Analysis on the Active/Inactive Status of Computational Resources for Improving the Performance of the GPU (GPU 성능 저하 해결을 위한 내부 자원 활용/비활용 상태 분석)

  • Choi, Hongjun;Son, Dongoh;Kim, Jongmyon;Kim, Cheolhong
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.7
    • /
    • pp.1-11
    • /
    • 2015
  • In recent high-performance computing systems, GPGPU has been widely used to process general-purpose applications as well as graphics applications, since the GPU can provide optimized computational resources for massively parallel processing. Unfortunately, GPGPU does not fully exploit the computational resources of the GPU when executing general-purpose applications, because the applications cannot be optimized for the GPU architecture. Therefore, we provide a GPU research guideline to improve the performance of computing systems using GPGPU. To accomplish this, we analyze the factors that degrade GPU performance. In this paper, in order to clearly classify the causes of these negative factors, GPU core status is classified into five states: fully active, partially active, idle, memory stall, and GPU core stall. All states except the fully active state cause performance degradation. We evaluate the ratio of each GPU core state depending on the characteristics of the benchmarks in order to find the specific reasons that degrade GPU performance. According to our simulation results, the partially active, idle, memory stall, and GPU core stall states are induced by computational resource underutilization, low parallelism, high memory request rates, and structural hazards, respectively.
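
As a rough illustration of the five-state breakdown described above, the Python sketch below classifies simulated per-cycle core samples into the five states; the decision order, thresholds, and sample values are assumptions for illustration, not the simulator's actual bookkeeping:

```python
from collections import Counter

def classify_cycle(active_warps, max_warps, pending_mem_requests, structural_hazard):
    """Map one simulated GPU-core cycle onto the five states used in the study."""
    if structural_hazard:
        return "GPU core stall"
    if pending_mem_requests and active_warps == 0:
        return "memory stall"
    if active_warps == 0:
        return "idle"
    if active_warps < max_warps:
        return "partially active"
    return "fully active"

# Synthetic per-cycle samples: (active warps, max warps, pending memory requests, hazard).
cycles = [
    (48, 48, 0, False), (32, 48, 2, False), (0, 48, 6, False),
    (0, 48, 0, False), (16, 48, 0, True), (48, 48, 1, False),
]

status = Counter(classify_cycle(*c) for c in cycles)
total = sum(status.values())
for state, count in status.items():
    print(f"{state:16s}: {100 * count / total:.1f}% of cycles")
```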