• Title/Summary/Keyword: Output unit


A Study of Improvements in the Standards of Cost Estimate for the New Excellent Technology in Construction (건설 신기술의 원가산정기준 개선방안에 대한 연구)

  • Lee, Ju-hyun;Tae, Yong-Ho;Baek, Seung-Ho;Kim, Kyoungmin
    • Korean Journal of Construction Engineering and Management
    • /
    • v.23 no.5
    • /
    • pp.65-76
    • /
    • 2022
  • The New Excellent Technology (NET) designation system, introduced in 1989 to promote the development of domestic construction technology and enhance national competitiveness, reviews the statement of construction cost of new technologies, and evaluates economic feasibility based on cost reduction effects in design, construction, and maintenance, and on the reduction of construction duration. However, in this evaluation process, differences of opinion about unique technologies frequently arise between the institution managing the construction cost estimating standards and the new technology developer. In addition, it is difficult to compare construction duration objectively with existing similar technologies, because the current cost estimating standards for new technologies present only the required amount per unit quantity and provide no productivity information. In this study, the current review procedure for cost estimating criteria, the evaluation criteria, and the method of establishing cost estimating standards used in screening new construction technologies for designation were analyzed and compared with overseas cost estimating standards, and measures to improve the current cost estimating standards for new construction technologies were suggested. Through the improved cost estimating standards of this study, cost information on new technologies is expected to be provided to clients in more detail than at present, and the availability and applicability of new construction technologies should improve as the construction cost calculation process is simplified.

Frequency Stability Enhancement of Power System using BESS (BESS를 활용한 전력계통 주파수 안정도 향상)

  • Yoo, Seong-Soo;Kwak, Eun-Sup;Moon, Chae-Joo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.4
    • /
    • pp.595-606
    • /
    • 2022
  • Korea has the characteristics of a traditional power system built around large-scale generation and transmission: 20 GW generation complexes in several regions with unit generator capacities exceeding 1.4 GW, 2-3 ultra-high-voltage transmission lines transporting power from each large-scale generation complex, and 6 ultra-high-voltage transmission lines transporting power from non-metropolitan areas to the metropolitan area. Although the penetration level of renewable energy is still low, some generators are already curtailing their output because of frequency stability issues. As renewable energy expands under current policy, maintaining power system stability is expected to become the most important issue. When non-inertial, inverter-based renewable energy such as solar and wind power surges rapidly, the means to improve power system stability in an independent system is to install a synchronous condenser (SC), a natural inertia resource, and a BESS, a virtual inertia resource. In this study, we analyzed the effect of renewable energy on power system stability and the effect of BESS on maintaining the minimum frequency through a power system simulation. The BESS effect, depending on the generation constraint capacity, was confirmed to reach a maximum of 122.81%.
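The frequency-nadir mechanism the abstract describes can be illustrated with a single-machine swing-equation toy model. This is a minimal sketch with hypothetical parameters, not the paper's grid simulation: after a sudden generation loss, frequency falls at a rate set by system inertia, and a fast BESS injection raises the nadir.

```python
# Single-machine swing-equation sketch (hypothetical H, D, and event sizes,
# not the paper's study case): Euler-integrate the frequency deviation after
# a generation loss, with and without a delayed BESS power injection.

F0 = 60.0   # nominal frequency (Hz)
H = 4.0     # aggregate inertia constant (s)
D = 1.0     # load damping (pu power per pu frequency deviation)
DT = 0.01   # integration step (s)

def simulate_nadir(loss_pu, bess_pu=0.0, bess_delay=0.5, t_end=10.0):
    """Return the lowest frequency reached after losing `loss_pu` generation."""
    f, t, nadir = F0, 0.0, F0
    while t < t_end:
        p_bess = bess_pu if t >= bess_delay else 0.0
        # net accelerating power: loss, BESS support, and load damping
        imbalance = -loss_pu + p_bess - D * (f - F0) / F0
        f += DT * F0 / (2.0 * H) * imbalance
        nadir = min(nadir, f)
        t += DT
    return nadir

without = simulate_nadir(loss_pu=0.1)
with_bess = simulate_nadir(loss_pu=0.1, bess_pu=0.05)
print(f"nadir without BESS: {without:.2f} Hz, with BESS: {with_bess:.2f} Hz")
```

With the fast injection active, the net imbalance shrinks after the delay, so the simulated nadir with BESS is strictly higher than without it.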

A Study on the Educational Gap between Regions according to the Manpower Allocation under the 「School Library Promotion Act」 (「학교도서관진흥법」 규정 인력 배치에 따른 지역 간 교육격차에 관한 연구)

  • Bong-Suk Kang
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.57 no.3
    • /
    • pp.231-248
    • /
    • 2023
  • The purpose of this study is to trigger a discussion on the regional educational gap in school library resources. To this end, differences and correlations between the other resources invested in school libraries and their outputs were analyzed according to manpower allocation. Manpower allocation showed a positive correlation with the number of books, the budget, the number of seats, the number of borrowed materials, and the number of students, and a negative correlation with the number of subjects in which the proportion of students receiving the lowest grade in the achievement evaluation exceeded one half. Examining staffing under the 「School Library Promotion Act」 by regional characteristics, the assignment rate was statistically significantly higher in the order of metropolitan areas, then provincial units. Regional characteristics were associated with differences in net assets per household as well as in school library manpower assignment rates: large cities, which were relatively affluent, were found to have higher assignment rates. Therefore, based on the survey in this study, it was emphasized that the manpower stipulated in the 「School Library Promotion Act」 should be deployed as soon as possible, even in relatively poor areas, to bridge the regional educational gap.

An Empirical Analysis on the Efficiency of the Projects for Strengthening the Service Business Competitiveness (서비스기업경쟁력강화사업의 효율성에 대한 실증 분석)

  • Kim, Dae Ho;Kim, Dongwook
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.6 no.5
    • /
    • pp.367-377
    • /
    • 2016
  • The projects for strengthening Service Business Competitiveness, sponsored by the Ministry of Trade, Industry and Energy and managed by the NIPA, aim to support SMEs in combining their whole business process with a business model that considers the scientific aspects of services, to enhance their productivity, and to add value to their activities. Five organizations were selected as leading organizations for these projects in 2014, and four in 2015. This study analyzed the efficiency of these projects using DEA. Based on an analysis of prior research, the amount of government-sponsored money was used as the input variable, and the number of new customer businesses, the sales revenue, and the number of new employees as the output variables. The analysis showed that decision making units 12, 15, and 21 were efficient. The study also identified two additional performance indicators, the number of new employees and the amount of sales revenue, besides the number of new customer businesses.
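A DEA efficiency score of the kind the abstract reports can be sketched in numpy for the single-input, multi-output case. The data below are hypothetical placeholders (the paper's subsidy and output figures are not reproduced), and the tiny vertex-enumeration solver stands in for a real LP library — it is only suitable for toy problem sizes.

```python
import itertools
import numpy as np

# Constant-returns (CCR) DEA with one input (subsidy) and several outputs
# (new customer businesses, sales revenue, new employment). The multiplier
# form for DMU o is:  max u.z_o  s.t.  Z u <= 1, u >= 0, where z_j = y_j/x_j.
# All DMUs share the same feasible region, so enumerating its vertices once
# yields every DMU's optimum. Efficient DMUs score 1.0.

def ccr_efficiency(x, Y):
    Z = Y / x[:, None]                   # outputs per unit input
    n, m = Z.shape
    A = np.vstack([Z, -np.eye(m)])       # rows of  Z u <= 1  and  -u <= 0
    b = np.concatenate([np.ones(n), np.zeros(m)])
    scores = np.zeros(n)
    for rows in itertools.combinations(range(n + m), m):
        As, bs = A[list(rows)], b[list(rows)]
        if abs(np.linalg.det(As)) < 1e-12:
            continue                     # rows not independent, no vertex
        u = np.linalg.solve(As, bs)      # candidate vertex
        if np.all(A @ u <= b + 1e-9):    # keep only feasible vertices
            scores = np.maximum(scores, Z @ u)
    return scores

# Hypothetical DMUs: subsidy (input) and three outputs per organization.
subsidy = np.array([100.0, 120.0, 90.0, 110.0, 95.0])
outputs = np.array([[10, 500, 5],
                    [12, 480, 6],
                    [ 9, 600, 4],
                    [ 8, 400, 3],
                    [11, 450, 5]], dtype=float)
eff = ccr_efficiency(subsidy, outputs)
print(np.round(eff, 3))
```

By construction every score is at most 1, and CCR theory guarantees at least one DMU on the efficient frontier scores exactly 1.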

Predicting blast-induced ground vibrations at limestone quarry from artificial neural network optimized by randomized and grid search cross-validation, and comparative analyses with blast vibration predictor models

  • Salman Ihsan;Shahab Saqib;Hafiz Muhammad Awais Rashid;Fawad S. Niazi;Mohsin Usman Qureshi
    • Geomechanics and Engineering
    • /
    • v.35 no.2
    • /
    • pp.121-133
    • /
    • 2023
  • The demand for cement and crushed limestone has increased manyfold due to the tremendous growth of construction activities in Pakistan during the past few decades. The number of cement production plants has increased correspondingly, and so have rock-blasting operations at limestone quarry sites. However, the safety procedures warranted at these sites for blast-induced ground vibrations (BIGV) have not been adequately developed and/or implemented. Proper prediction and monitoring of BIGV are necessary to ensure the safety of structures in the vicinity of these quarry sites. In this paper, an attempt has been made to predict BIGV using an artificial neural network (ANN) at three selected limestone quarries of Pakistan. The ANN has been developed in Python using Keras with a sequential model and dense layers. The hyperparameters and the number of neurons in each activation layer have been optimized using randomized and grid search methods. The input parameters for the model include distance, maximum charge per delay (MCPD), depth of hole, burden, spacing, and number of blast holes, whereas peak particle velocity (PPV) is the only output parameter. A total of 110 blast vibration datasets were recorded from the three limestone quarries and divided into 85% for network training and 15% for testing. A five-layer ANN was trained with the Rectified Linear Unit (ReLU) activation function and the Adam optimization algorithm, with a learning rate of 0.001, a batch size of 32, and a topology of 6-32-32-256-1. The blast datasets were used to compare the performance of the ANN, multivariate regression analysis (MVRA), and empirical predictors, evaluated using the coefficient of determination (R2), mean absolute error (MAE), mean squared error (MSE), mean absolute percentage error (MAPE), and root mean squared error (RMSE) for predicted and measured PPV. To determine the relative influence of each parameter on the PPV, sensitivity analyses were performed for all input parameters. The analyses reveal that the ANN performs better than MVRA and the empirical predictors, and that 83% of the PPV is governed by distance and MCPD, while hole depth, number of blast holes, burden, and spacing contribute the remaining 17%. This research provides valuable insights into improving safety measures and ensuring the structural integrity of buildings near limestone quarry sites.
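The stated network shape (6-32-32-256-1 with ReLU hidden layers) can be written down directly as a forward pass. This is a plain-numpy sketch with random placeholder weights — the paper's trained weights, input scaling, and data are not reproduced, so the numbers it emits are meaningless; only the architecture matches the abstract.

```python
import numpy as np

# Forward pass of the stated topology: 6 inputs (distance, MCPD, hole depth,
# burden, spacing, number of blast holes) -> 32 -> 32 -> 256 -> 1 (PPV).
# ReLU on hidden layers, linear output; He-style random init as placeholder.

rng = np.random.default_rng(0)
LAYERS = [6, 32, 32, 256, 1]

weights = [rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out))
           for fan_in, fan_out in zip(LAYERS[:-1], LAYERS[1:])]
biases = [np.zeros(fan_out) for fan_out in LAYERS[1:]]

def predict_ppv(x):
    """Map a batch of 6 blast-design features to a (untrained) PPV estimate."""
    h = np.asarray(x, dtype=float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W + b
        if i < len(weights) - 1:    # ReLU on hidden layers only
            h = np.maximum(h, 0.0)
    return h

batch = rng.random((4, 6))          # 4 hypothetical blast records
print(predict_ppv(batch).shape)     # one PPV value per record
```

In the paper's setup the same shape is built in Keras and fit with Adam (learning rate 0.001, batch size 32) on the 85% training split.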

THE CURRENT STATUS OF BIOMEDICAL ENGINEERING IN THE USA

  • Webster, John G.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1992 no.05
    • /
    • pp.27-47
    • /
    • 1992
  • Engineers have developed new instruments that aid in diagnosis and therapy. Ultrasonic imaging has provided a nondamaging method of imaging internal organs. A complex transducer emits ultrasonic waves at many angles and reconstructs a map of internal anatomy and also velocities of blood in vessels. Fast computed tomography permits reconstruction of the 3-dimensional anatomy and perfusion of the heart at 20-Hz rates. Positron emission tomography uses certain isotopes that produce positrons that react with electrons to simultaneously emit two gamma rays in opposite directions. It locates the region of origin by using a ring of discrete scintillation detectors, each in electronic coincidence with an opposing detector. In magnetic resonance imaging, the patient is placed in a very strong magnetic field. The precessing of the hydrogen atoms is perturbed by an interrogating field to yield two-dimensional images of soft tissue having exceptional clarity. As an alternative to radiology image processing, film archiving, and retrieval, picture archiving and communication systems (PACS) are being implemented. Images from computed radiography, magnetic resonance imaging (MRI), nuclear medicine, and ultrasound are digitized, transmitted, and stored in computers for retrieval at distributed workstations. In electrical impedance tomography, electrodes are placed around the thorax. A 50-kHz current is injected between two electrodes and voltages are measured on all other electrodes. A computer processes the data to yield an image of the resistivity of a 2-dimensional slice of the thorax. During fetal monitoring, a corkscrew electrode is screwed into the fetal scalp to measure the fetal electrocardiogram. Correlations with uterine contractions yield information on the status of the fetus during delivery. To measure cardiac output by thermodilution, cold saline is injected into the right atrium.
A thermistor in the right pulmonary artery yields temperature measurements, from which we can calculate cardiac output. In impedance cardiography, we measure the changes in electrical impedance as the heart ejects blood into the arteries. Motion artifacts are large, so signal averaging is useful during monitoring. An intraarterial blood gas monitoring system permits monitoring in real time. Light is sent down optical fibers inserted into the radial artery, where it is absorbed by dyes, which reemit the light at a different wavelength. The emitted light travels up optical fibers where an external instrument determines O2, CO2, and pH. Therapeutic devices include the electrosurgical unit. A high-frequency electric arc is drawn between the knife and the tissue. The arc cuts and the heat coagulates, thus preventing blood loss. Hyperthermia has demonstrated antitumor effects in patients in whom all conventional modes of therapy have failed. Methods of raising tumor temperature include focused ultrasound, radio-frequency power through needles, or microwaves. When the heart stops pumping, we use the defibrillator to restore normal pumping. A brief, high-current pulse through the heart synchronizes all cardiac fibers to restore normal rhythm. When the cardiac rhythm is too slow, we implant the cardiac pacemaker. An electrode within the heart stimulates the cardiac muscle to contract at the normal rate. When the cardiac valves are narrowed or leak, we implant an artificial valve. Silicone rubber and Teflon are used for biocompatibility. Artificial hearts powered by pneumatic hoses have been implanted in humans. However, the quality of life gradually degrades, and death ensues. When kidney stones develop, lithotripsy is used. A spark creates a pressure wave, which is focused on the stone and fragments it. The pieces pass out normally. When kidneys fail, the blood is cleansed during hemodialysis. 
Urea passes through a porous membrane to a dialysate bath to lower its concentration in the blood. The blind are able to read by scanning the Optacon with their fingertips. A camera scans letters and converts them to an array of vibrating pins. The deaf are able to hear using a cochlear implant. A microphone detects sound and divides it into frequency bands. 22 electrodes within the cochlea stimulate the acoustic nerve to provide sound patterns. For those who have lost muscle function in the limbs, researchers are implanting electrodes to stimulate the muscle. Sensors in the legs and arms feed back signals to a computer that coordinates the stimulators to provide limb motion. For those with high spinal cord injury, a puff-and-sip switch can control a computer and permit the disabled person to operate the computer and communicate with the outside world.
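The thermodilution calculation mentioned above follows the Stewart-Hamilton relation: cardiac output equals the injectate "cold dose" divided by the area under the blood-temperature dip recorded by the pulmonary-artery thermistor. The sketch below uses a synthetic gamma-shaped dilution curve and illustrative constants, not clinical values from the text.

```python
import numpy as np

# Stewart-Hamilton estimate:
#   CO = K * V_inj * (T_blood - T_inj) / integral(dT_blood dt)
# where K lumps the density/specific-heat correction for saline vs. blood.
# Everything below is a synthetic illustration.

def cardiac_output(t, delta_tb, v_inj, t_blood, t_inj, k=1.08):
    """Cardiac output in L/min from a temperature-dip curve (degC vs. s)."""
    # trapezoidal area under the dilution curve, in degC * s
    area = np.sum((delta_tb[:-1] + delta_tb[1:]) / 2.0 * np.diff(t))
    return k * v_inj * (t_blood - t_inj) / area * 60.0 / 1000.0  # mL/s -> L/min

t = np.linspace(0.0, 20.0, 401)                   # seconds after injection
curve = 0.5 * (t / 4.0) * np.exp(1.0 - t / 4.0)   # gamma-like dip, 0.5 degC peak
co = cardiac_output(t, curve, v_inj=10.0, t_blood=37.0, t_inj=0.0)
print(f"estimated cardiac output: {co:.1f} L/min")
```

A smaller area under the curve (the cold bolus washes out quickly) corresponds to a higher cardiac output, which is why the integral sits in the denominator.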

Study on the Physical Properties of the Gamma Beam-Irradiated Teflon-FEP and PET Film (Teflon-FEP 와 PET Film 의 감마선 조사에 따른 물리적 특성에 관한 연구)

  • 김성훈;김영진;이명자;전하정;이병용
    • Progress in Medical Physics
    • /
    • v.9 no.1
    • /
    • pp.11-21
    • /
    • 1998
  • Circular chromium electrodes were vacuum-deposited on both sides of Teflon-FEP and PET films, which exhibit electret characteristics, and the physical properties of the two polymers were observed during irradiation by gamma rays from 60Co. With the onset of irradiation at an output of 25.0 cGy/min, the induced current increased rapidly for 2 seconds, reached a maximum, and subsequently decreased; a steady-state induced current was reached in about 60 seconds. The dielectric constant and conductivity of Teflon-FEP changed from 2.15 to 18.0 and from 1x10^-17 to 1.57x10^-13 ohm^-1 cm^-1, respectively. For PET, the dielectric constant changed from 3 to 18.3 and the conductivity from 10^-17 to 1.65x10^-13 ohm^-1 cm^-1. The radiation-induced steady-state current Ic, permittivity, and conductivity all increased with output (4.0, 8.5, 15.6, and 19.3 cGy/min). A series of independent measurements performed to evaluate reproducibility revealed less than 1% deviation within a day and 3% deviation over the long term. Charge and current depended on the interval between measurements: the smaller the interval, the larger the difference between the initial reading and the next, and it took at least 20 minutes for the next reading to return to the initial value. This may indicate that the polymers remained in an electret state for a while. These results can be explained by the internal polarization associated with the production of electron-hole pairs by secondary electrons, the change of conductivity, and the equilibrium due to recombination. Heating the sample increased the reading in a short time, which may be interpreted as the release of internal polarization due to heating, contributing to an increase in the number of charge carriers when the samples were irradiated again. The linearity and reproducibility of the samples with applied voltage and absorbed dose, together with the large amount of charge measured per unit volume compared with other chambers, demonstrate the feasibility of a radiation detector and make it possible to reduce the detector volume.

Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.1
    • /
    • pp.1-12
    • /
    • 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors, then combine the feature vectors for each color and store them as a reference texture to be used in the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred back to the CPU to recognize the vehicle color. The output colors are categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors using histograms of hue-saturation pairs and hue-intensity pairs, with a weight factor applied to the saturation values. Our algorithm achieves a 94.67% color recognition rate on a large number of sample images captured in various environments, by generating feature vectors that distinguish different colors and by utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation functionality of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, 1,024 for each color. The average time for generating a feature vector is 0.509 ms for a 150x113 resolution image. After the feature vector is constructed, the execution time for GPU-based color recognition is 2.316 ms on average, 5.47 times faster than executing the algorithm on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to input images of general objects.
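The core feature idea — a saturation-weighted hue-saturation histogram matched against per-color references — can be sketched on the CPU. This toy keeps only that idea: the reference "images" are uniform synthetic patches rather than the paper's 7,168 samples, there is no GPU texture comparison, and histogram intersection stands in for the paper's likelihood function.

```python
import numpy as np

# Hue-saturation histogram feature vector with a saturation weight, and
# nearest-reference color matching. All patches and references below are
# synthetic illustrations, not the paper's vehicle data.

H_BINS, S_BINS = 18, 8

def hs_histogram(hsv, sat_weight=2.0):
    """Normalized 2-D hue/saturation histogram, flattened to a vector."""
    h = (hsv[..., 0].ravel() * H_BINS / 360.0).astype(int) % H_BINS
    s = np.clip((hsv[..., 1].ravel() * S_BINS).astype(int), 0, S_BINS - 1)
    hist = np.zeros((H_BINS, S_BINS))
    np.add.at(hist, (h, s), 1.0)
    # up-weight saturated pixels, as the abstract weights saturation values
    hist *= 1.0 + sat_weight * np.linspace(0.0, 1.0, S_BINS)
    return (hist / hist.sum()).ravel()

def make_patch(hue, sat):
    """Uniform hypothetical patch: HSV with hue in [0, 360), sat in [0, 1]."""
    img = np.zeros((16, 16, 2))
    img[..., 0], img[..., 1] = hue, sat
    return img

references = {"red": hs_histogram(make_patch(0, 0.9)),
              "blue": hs_histogram(make_patch(240, 0.9)),
              "silver": hs_histogram(make_patch(0, 0.05))}

def recognize(hsv):
    feat = hs_histogram(hsv)
    # histogram intersection as similarity: larger overlap = better match
    return max(references, key=lambda c: np.minimum(references[c], feat).sum())

print(recognize(make_patch(245, 0.9)))   # a strongly saturated bluish patch
```

Real images would need many samples per color and finer (or interpolated) bins; with these coarse bins a uniform patch only matches a reference landing in the same hue/saturation cell.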

Vasopressin in Young Patients with Congenital Heart Defects for Postoperative Vasodilatory Shock (선천성 심장병 수술 후 발생한 혈관확장성 쇼크에 대한 바소프레신의 치료)

  • 황여주;안영찬;전양빈;이재웅;박철현;박국양;한미영;이창하
    • Journal of Chest Surgery
    • /
    • v.37 no.6
    • /
    • pp.504-510
    • /
    • 2004
  • Background: Vasodilatory shock after cardiac surgery may result from the vasopressin deficiency following cardio-pulmonary bypass and sepsis, which did not respond to usual intravenous inotropes. In contrast to the adult patients, the effectiveness of vasopressin for vasodilatory shock in children has not been known well and so we reviewed our experience of vasopressin therapy in the small babies with a cardiac disease. Material and Method: Between February and August 2003, intravenous vasopressin was administrated in 6 patients for vasodilatory shock despite being supported on intravenous inotropes after cardiac surgery. Median age at operation was 25 days old (ranges; 2∼41 days) and median body weight was 2,870 grams (ranges; 900∼3,530 grams). Preoperative diag-noses were complete transposition of the great arteries in 2 patients, hypoplastic left heart syndrome in 1, Fallot type double-outlet right ventricle in 1, aortic coarctation with severe atrioventricular valve regurgitation in 1, and total anomalous pulmonary venous return in 1. Total repair and palliative repair were undertaken in each 3 patient. Result: Most patients showed vasodilatory shock not responding to the inotropes and required the vasopressin therapy within 24 hours after cardiac surgery and its readministration for septic shock. The dosing range for vasopressin was 0.0002∼0.008 unit/kg/minute with a median total time of its administration of 59 hours (ranges; 26∼140 hours). Systolic blood pressure before, 1 hour, and 6 hours after its administration were 42.7$\pm$7.4 mmHg, 53.7$\pm$11.4 mmHg, and 56.3$\pm$13.4 mmHg, respectively, which shows a significant increase in systolic blood pressure (systolic pressure 1hour and 6 hours after the administration compared to before the administration; p=0.042 in all). 
Inotropic indexes before, 6 hour, and 12 hours after its administration were 32.3$\pm$7.2, 21.0$\pm$8.4, and 21.2$\pm$8.9, respectively, which reveals a significant decrease in inotropic index (inotropic indexes 6 hour and 12 hours after the administration compared to before the administration; p=0.027 in all). Significant metabolic acidosis and decreased urine output related to systemic hypoperfusion were not found after vasopressin admin- istration. Conclusion: In young children suffering from vasodilatory shock not responding to common inotropes despite normal ventricular contractility, intravenous vasopressin reveals to be an effective vasoconstrictor to increase systolic blood pressure and to mitigate the complications related to higher doses of inotropes.

A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding(ARSU) and the Software Re/reverse-engineering Environment(SRE) (실시간 소프트웨어의 조절적${\cdot}$단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3159-3174
    • /
    • 1997
  • This paper reports research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called the Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to size and complexity, such software is commonly very hard to understand during the reengineering process. This research facilitates scalable re/reverse-engineering of real-time software based on its architecture, in three-dimensional perspectives: structural, functional, and behavioral views. Firstly, the structural view reveals the overall architecture, specification (outline), and algorithm (detail) views of the software, based on a hierarchically organized parent-child relationship. The basic building block of the architecture is a software unit (SWU), generated by user-defined criteria. The architecture facilitates navigation of the software in a top-down or bottom-up way. It captures the specification and algorithm views at different levels of abstraction, and also shows the functional and behavioral information at these levels. Secondly, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc. Each feature of the view covers a different kind of functionality of the software. Thirdly, the behavioral view includes state diagrams, interleaved event lists, etc. This view shows the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability of abstracting and exploding this dimensional information in the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software.
This approach allows engineers to extract reusable components from the software during reengineering process.