• Title/Summary/Keyword: Engineers


An Economic Analysis of the Effluent Heat Supply from Thermal Power Plant to the Farm Facility House (화력발전소 온배수열 활용 시설하우스 열공급 모형 경제성분석 연구)

  • Um, Byung Hwan;Ahn, Cha Su
    • Korean Chemical Engineering Research
    • /
    • v.56 no.1
    • /
    • pp.6-13
    • /
    • 2018
  • When the heat of the cooling-water discharge of a coal-fired power plant is used to supply heat to agricultural facilities near the plant, the pipeline investment cost increases in proportion to the installation distance. On one hand, a long distance from the power plant makes it difficult to secure economic feasibility; on the other hand, a short installation distance makes it difficult to secure sufficient heating demand from facility (greenhouse) houses, which also undermines economic feasibility. In this study, the economic feasibility of a standard 1 km heat pipeline was evaluated, and its sensitivity to variations in the newly installed pipeline length was analyzed with respect to the level of government subsidy, the amount of heating demand, and the incremental extension of the pipeline supported by additional subsidy. The analysis estimated an NPV of 131 million won and an IRR of 15.73%. The sensitivity analysis showed that the NPV becomes negative when the heat pipeline exceeds 2.6 km. If the government subsidizes 50% of the initial investment, economic feasibility is maintained up to an estimated length of 5.3 km; with an 80% subsidy, the feasible length increases to 11.4 km. If the heating demand falls below 62% of the assumed level at the standard pipeline length, economic feasibility is not expected to be achieved. As the ratio of the government subsidy to the initial investment increases, the elasticity of the newly installable pipeline length increases and the fixed investment, that is, the capital cost per unit of heating demand, decreases. This lowers the production cost per unit and makes it possible to supply heat to facility farms at a lower price. Government subsidies therefore increase the economic viability of heat pipeline facilities and yield additional efficiency gains from increased demand: the larger the subsidy relative to the initial investment, the lower the cost borne by farms as the unit price decreases. The results of the study are significant as an economic evaluation of the effectiveness of government subsidies for projects that utilize waste heat from thermal power plants, and the implications can be applied to related pilot projects.
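
The feasibility figures above come from standard discounted-cash-flow arithmetic. The sketch below shows, in outline, how an NPV and an IRR are computed for a subsidized pipeline investment; the investment amount, subsidy ratio, annual revenue, lifetime, and discount rate are hypothetical placeholders rather than the paper's data.

```python
# Illustrative only: hypothetical cash flows, not the paper's actual data.
# NPV and IRR for a heat-supply pipeline investment, assuming the government
# subsidy reduces the initial capital outlay.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical example: 1,000 million won investment, 50% subsidy,
# 20 years of net heat-sales revenue of 80 million won per year.
capex = 1000.0
subsidy_ratio = 0.5
annual_net_revenue = 80.0
cash_flows = [-(capex * (1 - subsidy_ratio))] + [annual_net_revenue] * 20

print(f"NPV at 5% discount rate: {npv(0.05, cash_flows):.1f} million won")
print(f"IRR: {irr(cash_flows):.2%}")
```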

The Development of an Electroconductive SiC-ZrB2 Composite through Spark Plasma Sintering under Argon Atmosphere

  • Lee, Jung-Hoon;Ju, Jin-Young;Kim, Cheol-Ho;Park, Jin-Hyoung;Lee, Hee-Seung;Shin, Yong-Deok
    • Journal of Electrical Engineering and Technology
    • /
    • v.5 no.2
    • /
    • pp.342-351
    • /
    • 2010
  • SiC-$ZrB_2$ composites were fabricated by combining 30, 35, 40, 45 and 50 vol.% of zirconium diboride ($ZrB_2$) powder with a silicon carbide (SiC) matrix. The composites were sintered into compacts by spark plasma sintering (SPS) under an argon atmosphere, and their physical, electrical, and mechanical properties were examined; thermal image analysis of the composites was also performed. No reaction between $\beta$-SiC and $ZrB_2$ was observed by X-ray diffraction (XRD) analysis. The apparent porosities of the SiC+30vol.%$ZrB_2$, SiC+35vol.%$ZrB_2$, SiC+40vol.%$ZrB_2$, SiC+45vol.%$ZrB_2$ and SiC+50vol.%$ZrB_2$ composites were 7.2546, 0.8920, 0.6038, 1.0981, and 10.0108%, respectively. XRD phase analysis of the sintered compacts showed strong SiC and $ZrB_2$ phases. Among the SiC-$ZrB_2$ composites, the SiC+50vol.%$ZrB_2$ composite had the lowest flexural strength, 290.54 MPa, while the other composites, except the SiC+30vol.%$ZrB_2$ composite, had flexural strengths above 980 MPa; the SiC+40vol.%$ZrB_2$ composite had the highest flexural strength, 1011.34 MPa, at room temperature. Electrically, the SiC-$ZrB_2$ composites showed a positive temperature coefficient of resistance (PTCR), and their V-I characteristics were linear over the range from room temperature to $500^{\circ}C$. The electrical resistivities of the SiC+30vol.%$ZrB_2$, SiC+35vol.%$ZrB_2$, SiC+40vol.%$ZrB_2$, SiC+45vol.%$ZrB_2$ and SiC+50vol.%$ZrB_2$ composites were $4.573\times10^{-3}$, $1.554\times10^{-3}$, $9.365\times10^{-4}$, $6.999\times10^{-4}$, and $6.069\times10^{-4}\Omega{\cdot}cm$, respectively, at room temperature, and their temperature coefficients of resistance were $1.896\times10^{-3}$, $3.064\times10^{-3}$, $3.169\times10^{-3}$, $3.097\times10^{-3}$, and $3.418\times10^{-3}/^{\circ}C$, respectively, over the range from room temperature to $500^{\circ}C$. Therefore, among the sintered compacts, the SiC+35vol.%$ZrB_2$, SiC+40vol.%$ZrB_2$ and SiC+45vol.%$ZrB_2$ composites, which combine the most outstanding mechanical properties with PTCR and linear V-I characteristics, are considered suitable for use as energy-friendly ceramic heaters or ohmic-contact electrode materials fabricated through SPS.
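
The reported PTCR behavior follows the usual linear model $\rho(T) = \rho_0\,[1 + \alpha\,(T - T_0)]$. The sketch below applies that relation to the resistivities and temperature coefficients quoted in the abstract; treating room temperature as 25 °C and extrapolating linearly to 500 °C are assumptions made here for illustration, not values taken from the paper.

```python
# A minimal sketch of the linear PTCR relation implied by the abstract:
# rho(T) = rho_0 * (1 + alpha * (T - T_0)), with T_0 taken as 25 degC
# (an assumption; the paper only states "room temperature").

def resistivity_at(rho_room, alpha, temp_c, room_c=25.0):
    """Projected resistivity (ohm*cm) at temp_c for a linear PTCR material."""
    return rho_room * (1.0 + alpha * (temp_c - room_c))

# Reported room-temperature resistivities (ohm*cm) and temperature
# coefficients of resistance (1/degC) for the SiC+ZrB2 composites.
composites = {
    "SiC+30vol.%ZrB2": (4.573e-3, 1.896e-3),
    "SiC+35vol.%ZrB2": (1.554e-3, 3.064e-3),
    "SiC+40vol.%ZrB2": (9.365e-4, 3.169e-3),
    "SiC+45vol.%ZrB2": (6.999e-4, 3.097e-3),
    "SiC+50vol.%ZrB2": (6.069e-4, 3.418e-3),
}

for name, (rho_room, alpha) in composites.items():
    rho_500 = resistivity_at(rho_room, alpha, 500.0)
    print(f"{name}: rho(500 degC) ~ {rho_500:.3e} ohm*cm")
```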

Automatic Text Extraction from News Video using Morphology and Text Shape (형태학과 문자의 모양을 이용한 뉴스 비디오에서의 자동 문자 추출)

  • Jang, In-Young;Ko, Byoung-Chul;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.4
    • /
    • pp.479-488
    • /
    • 2002
  • In recent years, the amount of digital video in use has risen dramatically to keep pace with the increasing use of the Internet, and consequently an automated method is needed for indexing digital video databases. Textual information, both superimposed text and embedded scene text, appearing in a digital video can be a crucial clue for video indexing. In this paper, a new method is presented to extract both superimposed and embedded scene text from a freeze-frame of news video. The algorithm is summarized in the following three steps. In the first step, the color image is converted into a gray-level image, contrast stretching is applied to enhance the contrast of the input image, and then a modified local adaptive thresholding is applied to the contrast-stretched image. The second step is divided into three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose+CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose+CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two result images to further eliminate false-positive components. In the third, filtering step, the characteristics of each component are used, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels and the ratio of the minor to the major axis of each bounding box. Acceptable results have been obtained using the proposed method on 300 news images, with a recognition rate of 93.6%. The proposed method also performs well on various kinds of images when the size of the structuring element is adjusted.
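
A rough sketch of such a pipeline is given below using OpenCV. It follows the three-step outline (gray conversion with contrast stretching and local adaptive thresholding, morphological filtering, and component-level filtering), but the (OpenClose+CloseOpen)/2 operator is only approximated, the Geo-correction step is omitted, and all kernel sizes and filtering thresholds are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def extract_text_candidates(bgr_frame):
    # Step 1: gray conversion, contrast stretching, local adaptive threshold.
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
    binary = cv2.adaptiveThreshold(stretched, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 15, 5)

    # Step 2: morphological filtering. (OpenClose + CloseOpen) / 2 is
    # approximated here by averaging the two composite operations and
    # re-binarizing; the Geo-correction step is not reproduced.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    open_close = cv2.morphologyEx(
        cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel), cv2.MORPH_CLOSE, kernel)
    close_open = cv2.morphologyEx(
        cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel), cv2.MORPH_OPEN, kernel)
    averaged = ((open_close.astype(np.uint16) + close_open) // 2).astype(np.uint8)
    filtered = cv2.threshold(averaged, 127, 255, cv2.THRESH_BINARY)[1]

    # Step 3: component filtering by area-to-boundary ratio and bounding-box
    # aspect ratio (thresholds are placeholders, not the paper's values).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(filtered)
    boxes = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        perimeter = 2 * (w + h)                 # crude boundary-pixel estimate
        if area / max(perimeter, 1) > 1.5 and 0.1 < h / max(w, 1) < 10:
            boxes.append((x, y, w, h))
    return boxes
```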

A Study of Evaluation Index Development of Healthcare Rehabilitation Device Design (헬스케어 재활훈련기 디자인 평가 요소 개발에 관한 연구)

  • Cho, Jae Sang;Kwon, Tae Kyu;Hong, Jung Pyo
    • Science of Emotion and Sensibility
    • /
    • v.17 no.3
    • /
    • pp.129-142
    • /
    • 2014
  • Due to the growth of the aged population and the population of people with disabilities, there is increasing demand for rehabilitation medical instruments, and consequently for evaluation indices covering the services that should be provided to users of such instruments. To evaluate rehabilitation medical instrument designs, this study identifies basic design evaluation indices and explores assessment methods; from these, new evaluation indices are derived through discussion and analysis involving rehabilitation medicine experts, biomedical engineers, and designers. The results of this study are summarized as follows. First, existing design evaluation indices were collected and analyzed to construct 10 rehabilitation medical instrument design evaluation indices and 44 sub-evaluation items; these will be important evaluation standards for designing rehabilitation medical instruments in the future. Second, the design evaluation indices that must be taken into consideration when developing healthcare rehabilitation medical instruments are the 10 indices of usability, cognition, safety, learning, motility, durability, economic feasibility, space, aesthetics, and environmental aspects. Third, the indices of environment, space, cognition, usability, economic feasibility, and aesthetics must be considered for product design in general, while learning, safety, motility, and durability are factors that require special consideration in rehabilitation medical instrument design evaluation. Fourth, whereas existing product design evaluation indices place importance on the environment, space, cognition, usability, economic feasibility, and aesthetics of products, rehabilitation medical instrument design evaluation indices additionally emphasize learning, safety, motility, and durability on top of usability and economic feasibility; this is the difference between the design evaluation indices for rehabilitation medical instruments and those for other product designs. The 10 rehabilitation medical device design evaluation indices and 44 sub-evaluation items were established in this study. This research covers only the overall rehabilitation medical device design evaluation indices; in future research, the evaluation indices will be applied to actual rehabilitation medical device designs through the production of prototypes, with revisions and supplements made where necessary.

Utility-Based Video Adaptation in MPEG-21 for Universal Multimedia Access (UMA를 위한 유틸리티 기반 MPEG-21 비디오 적응)

  • 김재곤;김형명;강경옥;김진웅
    • Journal of Broadcast Engineering
    • /
    • v.8 no.4
    • /
    • pp.325-338
    • /
    • 2003
  • Video adaptation in response to dynamic resource conditions and user preferences is required as a key technology to enable universal multimedia access (UMA) through heterogeneous networks by a multitude of devices in a seamless way. Although many adaptation techniques exist, the selection of an appropriate adaptation among multiple choices that satisfy given constraints is often ad hoc. To provide a systematic solution, we present a general conceptual framework that models the video entity, adaptation, resource, utility, and the relations among them. It allows various adaptation problems to be formulated as resource-constrained utility maximization. We apply the framework to a practical case of dynamic bit-rate adaptation of MPEG-4 video streams by employing a combination of frame dropping and DCT coefficient dropping. Furthermore, we present a descriptor, which has been accepted as part of MPEG-21 Digital Item Adaptation (DIA), for supporting terminal and network quality of service (QoS) in an interoperable manner. Experiments are presented to demonstrate the feasibility of the presented framework using the descriptor.
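
To make the resource-constrained utility maximization concrete, the sketch below selects, from a set of candidate adaptations (combinations of frame dropping and DCT coefficient dropping), the one with the highest utility that fits a bit-rate budget. The candidate names, bit rates, and utility values are invented for illustration and do not come from the paper or the MPEG-21 DIA descriptor.

```python
from typing import NamedTuple, Optional

class Adaptation(NamedTuple):
    name: str
    bitrate_kbps: float   # resource required after applying the adaptation
    utility: float        # e.g., predicted perceptual quality in [0, 1]

def select_adaptation(candidates, bitrate_budget_kbps) -> Optional[Adaptation]:
    """Return the feasible adaptation with maximum utility, if any."""
    feasible = [a for a in candidates if a.bitrate_kbps <= bitrate_budget_kbps]
    return max(feasible, key=lambda a: a.utility, default=None)

# Hypothetical adaptation space: frame dropping, coefficient dropping, both.
candidates = [
    Adaptation("no adaptation",              1000.0, 1.00),
    Adaptation("drop B frames",               700.0, 0.90),
    Adaptation("drop 30% DCT coeffs",         650.0, 0.85),
    Adaptation("drop B frames + 30% coeffs",  450.0, 0.72),
]

print(select_adaptation(candidates, bitrate_budget_kbps=800))
```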

Research for Application of Interactive Data Broadcasting Service in DMB (DMB에서의 양방향 데이터방송 서비스도입에 관한 연구)

  • Kim, Jong-Geun;Choe, Seong-Jin;Lee, Seon-Hui
    • Broadcasting and Media Magazine
    • /
    • v.11 no.4
    • /
    • pp.104-117
    • /
    • 2006
  • In this paper, we analyze the introduction of interactive data broadcasting in DMB (Digital Multimedia Broadcasting) in accordance with the convergence of services and technologies. With the acceleration of digital convergence in the ubiquitous era, substantial development of digital media technology and convergence of the broadcasting and telecommunication industries are being witnessed. These trends have given rise to newly converged products and services such as DMB (Digital Multimedia Broadcasting), WCDMA (Wideband Code Division Multiple Access), WiBro (Wireless Broadband Internet), IP-TV (Internet Protocol TV) and HSDPA (High Speed Downlink Packet Access). The preparatory stage for the implementation of interactive data broadcasting services will be reached by the end of December 2006. DMB is the first successful convergence service between broadcasting and telecommunication in the new media era, and multimedia technology and services are its core elements. Data broadcasting will not only offer various interactive information services such as news, weather, and broadcasting program information, but will also be linked with characteristic functions of the mobile phone, such as calling and SMS (Short Message Service), via the return channel.

A Fast 4X4 Intra Prediction Method using Motion Vector Information and Statistical Mode Correlation between 16X16 and 4X4 Intra Prediction in H.264|MPEG-4 AVC (H.264|MPEG-4 AVC 비디오 부호화에서 움직임 벡터 정보와 16X16 및 4X4 화면 내 예측 최종 모드간 통계적 연관성을 이용한 화면 간 프레임에서의 4X4 화면 내 예측 고속화 방법)

  • Na, Tae-Young;Jung, Yun-Sik;Kim, Mun-Churl;Hahm, Sang-Jin;Park, Chang-Seob;Park, Keun-Soo
    • Journal of Broadcast Engineering
    • /
    • v.13 no.2
    • /
    • pp.200-213
    • /
    • 2008
  • H.264|MPEG-4 AVC is a video coding standard defined by the JVT (Joint Video Team), which consists of ITU-T and ISO/IEC. Many techniques are adopted for compression efficiency; in particular, intra prediction in inter frames is one example, but it leads to an excessive amount of encoding time due to candidate mode decision and RD cost calculation. For this reason, fast determination of the best intra prediction mode is the main issue in reducing the encoding time. In this paper, the number of candidate modes for $4{\times}4$ intra prediction is reduced by using the statistical relation between the $16{\times}16$ and $4{\times}4$ intra predictions. First, the block mode of each macroblock is predicted using the motion vectors obtained from inter prediction. If intra prediction is needed, a correlation table between the final $16{\times}16$ and $4{\times}4$ intra prediction modes is built from the probabilities observed during each I-frame coding process. Second, using this result, only the candidate modes for $4{\times}4$ intra prediction whose cumulative probability reaches a predefined value are considered within the same GOP. For the experiments, JM11.0, the reference software of H.264|MPEG-4 AVC, is used, and the experimental results show that the encoding time can be reduced by up to 51.24% with negligible PSNR drop and bitrate increase.
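
The candidate-mode reduction can be pictured as a conditional probability table that is updated during I-frame coding and then queried for the remaining frames of the GOP. The sketch below is an illustrative data structure only; the class name, the nine-mode count assumed for $4{\times}4$ intra prediction, and the cumulative-probability threshold are placeholders and do not reproduce the JM11.0 implementation.

```python
from collections import defaultdict

class ModeCorrelationTable:
    def __init__(self, num_4x4_modes=9):
        # counts[mode16][mode4] = how often mode4 was the final 4x4 mode
        # when mode16 was the final 16x16 mode (gathered during I frames).
        self.counts = defaultdict(lambda: [0] * num_4x4_modes)

    def update(self, mode16, mode4):
        """Record an observed (16x16 mode, 4x4 mode) pair from I-frame coding."""
        self.counts[mode16][mode4] += 1

    def candidates(self, mode16, threshold=0.9):
        """4x4 candidate modes covering `threshold` of the observed probability."""
        counts = self.counts[mode16]
        total = sum(counts) or 1
        ranked = sorted(range(len(counts)), key=lambda m: counts[m], reverse=True)
        chosen, cumulative = [], 0.0
        for m in ranked:
            chosen.append(m)
            cumulative += counts[m] / total
            if cumulative >= threshold:
                break
        return chosen

# Usage: after coding an I frame, the table restricts the 4x4 RD-cost search
# for the subsequent frames in the same GOP.
table = ModeCorrelationTable()
for m16, m4 in [(0, 2), (0, 2), (0, 0), (1, 1), (0, 2)]:
    table.update(m16, m4)
print(table.candidates(0, threshold=0.8))   # e.g. [2, 0]
```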

Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.99-118
    • /
    • 2015
  • Object-oriented programming languages have been widely adopted for developing modern information systems. The use of object-oriented (OO) programming concepts has reduced the effort of reusing pre-existing code, and OO concepts have proved useful in interpreting system requirements. In line with this, modern conceptual modeling approaches support features of object-oriented programming. Unified Modeling Language (UML) has become a de-facto standard for information system designers, since the language provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider relationships between classes. Based on an explicit and clear representation of classes, a UML conceptual model gathers the attributes and methods needed to guide software engineers. In particular, identifying an association between a part class and a whole class is included in the standard grammar of UML. Representing part-whole relationships is natural in real-world domains, since many physical objects are perceived as part-whole structures, and even abstract concepts such as roles are easily identified through part-whole perception. A part-whole representation in UML therefore seems reasonable and useful. However, it should be admitted that the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it into an aggregate or a composite association. Research on developing such procedural knowledge is meaningful and timely, because a misleading perception of a part-whole relationship is hard to filter out during initial conceptual modeling and thus degrades system usability. The current method for identifying and classifying part-whole relationships relies mainly on linguistic expression. This simple approach is rooted in the idea that a has-a phrase indicates a part-whole perception between objects: if the relationship is strong, the association is classified as a composite association; otherwise, it is an aggregate association. Admittedly, linguistic expressions contain clues to part-whole relationships, so the approach is reasonable and cost-effective in general; nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research on guidelines for part-whole identification and classification has not accumulated sufficient results to resolve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers handle part-whole classes are developed. To evaluate the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with the natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared to traditional approaches that aim to establish criteria for evaluating the result of conceptual modeling, this study expands the scope to the modeling process itself. Traditional theories on evaluating part-whole relationships in conceptual modeling aim to rule out incomplete or wrong representations. Such qualification is still important, but the lack of a practical alternative may reduce the usefulness of posterior inspection for modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. In addition, it is highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating alternative forms of guidelines, including plugins for integrated development environments.

An Efficient Algorithm for Streaming Time-Series Matching that Supports Normalization Transform (정규화 변환을 지원하는 스트리밍 시계열 매칭 알고리즘)

  • Loh, Woong-Kee;Moon, Yang-Sae;Kim, Young-Kuk
    • Journal of KIISE:Databases
    • /
    • v.33 no.6
    • /
    • pp.600-619
    • /
    • 2006
  • Owing to recent technical advances in sensors and mobile devices, processing the data streams generated by such devices is becoming an important research issue. A data stream of real values obtained at continuous time points is called a streaming time-series. Due to the unique features of streaming time-series, which differ from those of traditional time-series, the similarity matching problem on streaming time-series should be solved in a new way. In this paper, we propose an efficient algorithm for the streaming time-series matching problem that supports normalization transform. While existing algorithms compare streaming time-series without any transform, the proposed algorithm compares them after they are normalization-transformed. The normalization transform is useful for finding time-series that have similar fluctuation trends even though their element values differ greatly in magnitude. The major contributions of this paper are as follows. (1) Using a theorem presented in the context of subsequence matching that supports normalization transform [4], we propose a simple algorithm for solving the problem. (2) To improve search performance, we extend the simple algorithm to use $k\;({\geq}\;1)$ indexes. (3) For a given k, to achieve optimal search performance of the extended algorithm, we present an approximation method for choosing the k window sizes used to construct the k indexes. (4) Based on the notion of continuity [8] on streaming time-series, we further extend our algorithm so that it can simultaneously obtain the search results for $m\;({\geq}\;1)$ time points, from the present time $t_0$ to a time point $(t_0+m-1)$ in the near future, by retrieving the index only once. (5) Through a series of experiments, we compare the search performance of the proposed algorithms and show their performance trends according to the values of k and m. To the best of our knowledge, no previous algorithm solves the same problem presented in this paper, so we compare the search performance of our algorithms with the sequential scan algorithm. The experimental results show that our algorithms outperform the sequential scan algorithm by up to 13.2 times, and their performance improves further as k increases.
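
The core idea, matching under normalization transform, can be sketched without the index structure: each incoming window is z-normalized and compared against the z-normalized query, so a scaled or shifted copy of the query pattern still matches. The code below shows only this baseline behavior; the k-index pruning and the m-point lookahead described in the abstract are not reproduced, and the window handling and tolerance are illustrative.

```python
import math
from collections import deque

def z_normalize(seq):
    mean = sum(seq) / len(seq)
    std = math.sqrt(sum((x - mean) ** 2 for x in seq) / len(seq)) or 1.0
    return [(x - mean) / std for x in seq]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def streaming_match(stream, query, epsilon):
    """Yield (time, distance) whenever the latest window matches the query."""
    norm_query = z_normalize(query)
    window = deque(maxlen=len(query))
    for t, value in enumerate(stream):
        window.append(value)
        if len(window) == len(query):
            d = euclidean(z_normalize(list(window)), norm_query)
            if d <= epsilon:
                yield t, d

# Example: a scaled copy of the query still matches after normalization.
query = [1.0, 2.0, 3.0, 2.0, 1.0]
stream = [5.0, 5.0, 10.0, 20.0, 30.0, 20.0, 10.0, 5.0]
print(list(streaming_match(stream, query, epsilon=0.1)))   # [(6, 0.0)]
```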

The Performance Bottleneck of Subsequence Matching in Time-Series Databases: Observation, Solution, and Performance Evaluation (시계열 데이타베이스에서 서브시퀀스 매칭의 성능 병목 : 관찰, 해결 방안, 성능 평가)

  • 김상욱
    • Journal of KIISE:Databases
    • /
    • v.30 no.4
    • /
    • pp.381-396
    • /
    • 2003
  • Subsequence matching is an operation that finds, from time-series databases, subsequences whose changing patterns are similar to a given query sequence. This paper points out the performance bottleneck in subsequence matching and then proposes an effective method that significantly improves the performance of entire subsequence matching by resolving that bottleneck. First, we analyze the disk access and CPU processing times required during the index searching and post-processing steps through preliminary experiments. Based on the results, we show that the post-processing step is the main performance bottleneck in subsequence matching, and we claim that its optimization is a crucial issue overlooked in previous approaches. To resolve the bottleneck, we propose a simple but quite effective method that processes the post-processing step in an optimal way. By rearranging the order of the candidate subsequences to be compared with the query sequence, our method completely eliminates the redundant disk accesses and CPU processing that occur in the post-processing step. We formally prove that our method is optimal and does not incur any false dismissal. We show the effectiveness of our method through extensive experiments. The results show that our method speeds up the post-processing step by 3.91 to 9.42 times when using a data set of real-world stock sequences and by 4.97 to 5.61 times when using data sets of a large volume of synthetic sequences. The results also show that our method reduces the weight of the post-processing step in entire subsequence matching from about 90% to less than 70%, which implies that it successfully resolves the performance bottleneck. As a result, our method provides excellent performance in entire subsequence matching: the experimental results reveal that it is 3.05 to 5.60 times faster than the previous method when using a data set of real-world stock sequences and 3.68 to 4.21 times faster when using data sets of a large volume of synthetic sequences.
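
One way to picture the reordering idea is to group the candidate offsets returned by the index search by the disk page that stores them, so that each page is fetched only once during post-processing. The sketch below is a hypothetical concretization under that reading of the abstract: read_page and distance are placeholder callbacks, the page size and tolerance are illustrative, and candidates that cross a page boundary are ignored for brevity.

```python
from collections import defaultdict

def post_process(candidates, read_page, query, distance, epsilon, page_size=256):
    """candidates: list of (sequence_id, offset) pairs from the index search."""
    # Group candidate offsets by (sequence, page) so each page is fetched once,
    # instead of re-reading the same page for every candidate in index order.
    by_page = defaultdict(list)
    for seq_id, offset in candidates:
        by_page[(seq_id, offset // page_size)].append(offset)

    results = []
    for (seq_id, page_no), offsets in sorted(by_page.items()):
        page = read_page(seq_id, page_no)            # one disk access per page
        for offset in offsets:
            start = offset % page_size
            subseq = page[start:start + len(query)]
            # Subsequences crossing a page boundary are skipped in this sketch.
            if len(subseq) == len(query) and distance(subseq, query) <= epsilon:
                results.append((seq_id, offset))
    return results
```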