• Title/Summary/Keyword: computing

Search Results: 14,944

Benchmark Results of a Monte Carlo Treatment Planning system (몬데카를로 기반 치료계획시스템의 성능평가)

  • Cho, Byung-Chul
    • Progress in Medical Physics
    • /
    • v.13 no.3
    • /
    • pp.149-155
    • /
    • 2002
  • Recent advances in radiation transport algorithms, computer hardware performance, and parallel computing make the clinical use of Monte Carlo based dose calculations possible. To compare the speed and accuracy of dose calculations between different codes, a set of benchmark tests was proposed at the XIIth ICCR (International Conference on the Use of Computers in Radiation Therapy, Heidelberg, Germany, 2000). A Monte Carlo treatment planning system comprising 28 Intel Pentium CPUs was implemented for routine clinical use. The purpose of this study was to evaluate the performance of our system using the above benchmark tests. The benchmark procedure comprises three parts: a) speed of photon beam dose calculation inside a given phantom of 30.5 cm × 39.5 cm × 30 cm deep, filled with 5 mm³ voxels, within 2% statistical uncertainty; b) speed of electron beam dose calculation inside the same phantom as that of the photon beams; c) accuracy of photon and electron beam calculation inside a heterogeneous slab phantom compared with the reference results of an EGS4/PRESTA calculation. In the speed benchmark tests, it took 5.5 minutes to achieve less than 2% statistical uncertainty for 18 MV photon beams. Though the net calculation for electron beams was an order of magnitude faster than that for photon beams, the overall calculation time was similar to the photon beam case due to the overhead of maintaining parallel processing. Since our Monte Carlo code is EGSnrc, an improved version of EGS4, the accuracy tests of our system showed, as expected, very good agreement with the reference data. In conclusion, our Monte Carlo treatment planning system shows clinically meaningful results. Though more efficient codes such as MCDOSE and VMC++ have been developed, BEAMnrc, based on the EGSnrc code system, may be used for routine clinical Monte Carlo treatment planning in conjunction with clustering techniques.
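The 2% statistical-uncertainty criterion in benchmarks like these is conventionally estimated by splitting the histories into independent batches (here, naturally, one per CPU) and taking the standard error of the per-batch dose tallies. A minimal sketch of that standard batch estimate, not taken from the paper; the batch count and dose arrays below are illustrative:

```python
import numpy as np

def batch_uncertainty(batch_doses):
    """Relative statistical uncertainty of a Monte Carlo dose estimate.

    batch_doses: array of shape (n_batches, n_voxels), one independent
    dose tally per parallel batch (e.g., one per CPU).
    """
    n = batch_doses.shape[0]
    mean = batch_doses.mean(axis=0)
    # standard error of the mean over independent batches
    sem = batch_doses.std(axis=0, ddof=1) / np.sqrt(n)
    return sem / np.where(mean > 0, mean, np.inf)

# illustrative: 28 batches (one per CPU), 1000 voxels of synthetic dose
rng = np.random.default_rng(0)
doses = rng.normal(1.0, 0.05, size=(28, 1000))
print((batch_uncertainty(doses) < 0.02).mean())  # fraction of voxels under 2%
```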

  • PDF

4-way Search Window for Improving The Memory Bandwidth of High-performance 2D PE Architecture in H.264 Motion Estimation (H.264 움직임추정에서 고속 2D PE 아키텍처의 메모리대역폭 개선을 위한 4-방향 검색윈도우)

  • Ko, Byung-Soo;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.6
    • /
    • pp.6-15
    • /
    • 2009
  • In this paper, a new 4-way search window is designed for a high-performance 2D PE architecture in H.264 Motion Estimation (ME) to improve the memory bandwidth. While existing 2D PE architectures reuse the overlapped data of adjacent search windows scanned in 1 or 3 ways, the new window utilizes the overlapped data of adjacent search windows as well as adjacent multiple scanning (window) paths to enhance the reuse of retrieved search window data. Scanning adjacent windows and multiple paths bidirectionally by row and column, instead of single raster or zigzag scanning of adjacent windows, yields the 4-way (up, down, left, right) search window. The proposed 4-way search window improves the reuse of overlapped window data, reducing the redundancy access factor to 3.1, whereas the 1/3-way search window redundantly requires 7.7~11 times of data retrieval. Thus, the new 4-way search window scheme enhances the memory bandwidth by 70~58% compared with the 1/3-way search window. The 2D PE architecture in H.264 ME for the 4-way search window consists of a 16×16 PE array, computing the absolute differences between the current and reference frames, and a 5×16 reuse array, storing the overlapped data of adjacent search windows and multiple scanning paths. The reference data can be loaded upward or downward into the new 2D PE depending on the scanning direction, and the reuse array is combined with the PE array rotating left as well as right to utilize the overlapped data of adjacent multiple scan paths. In experiments, the new implementation of the 4-way search window on MagnaChip 0.18 µm could process HD (1280×720) video with 1 reference frame, a 48×48 search area, and 16×16 macroblocks at 30 fps and 149.25 MHz.
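For reference, the core operation each PE performs is a sum of absolute differences (SAD) between the current macroblock and every candidate block in the search window; the data-reuse schemes above change only how reference pixels are fetched, not this arithmetic. A minimal full-search sketch whose sizes mirror the paper's 16×16 macroblock and 48×48 search area (the implementation itself is illustrative, not the hardware datapath):

```python
import numpy as np

def full_search_sad(cur_mb, search_area):
    """Exhaustive motion search: return the (dy, dx) minimizing SAD.

    cur_mb: 16x16 current macroblock.
    search_area: 48x48 reference pixels around the macroblock position.
    """
    n = cur_mb.shape[0]                      # 16
    best, best_mv = None, (0, 0)
    for dy in range(search_area.shape[0] - n + 1):
        for dx in range(search_area.shape[1] - n + 1):
            ref = search_area[dy:dy + n, dx:dx + n]
            sad = np.abs(cur_mb.astype(int) - ref.astype(int)).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(1)
mv, sad = full_search_sad(rng.integers(0, 256, (16, 16)),
                          rng.integers(0, 256, (48, 48)))
print(mv, sad)
```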

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by table lookup; on-chip defuzzification by the centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip in a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach of the kind developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs only a single or a few programs, so tailoring a microprocessor into an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds of 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

                    MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences   125 s                  49 s                        0.0038 s
  1 inference       20.8 ms                8.2 ms                      6.4 µs
  FLIPS             48                     122                         156,250
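The max-min compositional inference these chips implement reduces, per rule, to a min over the antecedent memberships (firing strength), clipping of the consequent set by that strength, a max across rules, and centroid defuzzification. A minimal software sketch of that mechanism, using the 64-element fuzzy sets described above; the rule tables and inputs are illustrative:

```python
import numpy as np

def mamdani_infer(rules, inputs, universe):
    """Max-min (Mamdani) inference with centroid defuzzification.

    rules: list of (antecedent_sets, consequent_set); each set is a
    64-element membership array, matching the chip's representation.
    inputs: crisp input values, one per antecedent.
    """
    agg = np.zeros_like(universe)
    for antecedents, consequent in rules:
        # firing strength: min of memberships at the crisp inputs
        idx = [np.searchsorted(universe, x) for x in inputs]
        w = min(a[i] for a, i in zip(antecedents, idx))
        # clip consequent (min) and aggregate across rules (max)
        agg = np.maximum(agg, np.minimum(w, consequent))
    return (universe * agg).sum() / agg.sum()  # centroid

u = np.linspace(0.0, 1.0, 64)
tri = lambda c: np.clip(1 - abs(u - c) / 0.3, 0, 1)  # triangular set
rules = [((tri(0.2), tri(0.3)), tri(0.8)),
         ((tri(0.7), tri(0.6)), tri(0.2))]
print(mamdani_infer(rules, inputs=(0.25, 0.35), universe=u))
```

The hardware gains its speed by evaluating all rules' min/max operations in parallel datapaths; the loop above is the sequential equivalent that the R3000 simulator runs.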

  • PDF

The Economic Effects of Tax Incentives for Housing Owners: An Overview and Policy Implications (주택소유자(住宅所有者)에 대한 조세감면(租稅減免)의 경제적(經濟的) 효과(效果) : 기존연구(旣存硏究)의 개관(槪觀) 및 정책시사점(政策示唆點))

  • Kim, Myong-sook
    • KDI Journal of Economic Policy
    • /
    • v.12 no.2
    • /
    • pp.135-149
    • /
    • 1990
  • Housing owners in Korea enjoy a variety of tax advantages, such as income tax exemption for the imputed rent of owner-occupied housing, exemption from the capital gains tax, and an estate tax deduction for one-house households. These tax reliefs for housing owners not only conflict with the principles of horizontal and vertical equity, but also lead to resource misallocation by distorting the housing market, and thus bring about regressive distributional effects. Particularly in Korea, with its imperfect capital market, these measures exacerbate inter-class inequality in housing ownership as well as inequality in wealth, by inducing the affluent to demand needlessly large housing while the poor and the young experience difficulties in purchasing residential property. Therefore, the Korean tax system should be altered as follows to reduce the advantages enjoyed by owner-occupiers, especially owners of luxury housing. These alterations would promote housing ownership, tax burden equity, and efficiency of resource allocation, as well as a more desirable distribution of income. First, income tax deductions for the rent payments of tenants are recommended. Ideally, the way to restore fiscal equivalence between owner-occupiers and tenants would be to levy an income tax on the former's imputed rents and, if necessary, to give them tax credits. This, however, would be very difficult in practice, because the general public may perceive the concept of "imputed rent" as cumbersome, and computing the imputed rent entails administrative costs. It is therefore quite reasonable to continue exempting imputed rent from taxation while simultaneously granting an income tax deduction to tenants. This would also enhance the administrative efficiency of income tax collection by easing the assessment of landlords' income. Second, a capital gains tax should be levied on the one-house household, with payment postponed in the case that the seller purchases a higher-priced property. Exempting one-house households from the capital gains tax favors those who own more expensive housing, giving the rich an incentive to hold even larger residences and constructors an incentive to build more luxurious housing to meet that demand. It is therefore not desirable to sustain the current one-house household exemption while merely supplementing it with piecemeal measures. Rather, the exemption should be abolished completely, with a concurrent reform of the deduction system and a lowering of the tax rate, measures which the author believes will help optimize the incidence of the capital gains tax. Finally, discontinuation of the housing exemption for heirs is suggested. The consequent increase in the tax burden of the middle class could be mitigated by a reduction in the rate. The same applies to the specific exemptions for farmland, meadows, woods, and business fields, in order to foster horizontal equity while discouraging the land speculation that leads to a loss of allocative efficiency. Moreover, imperfections in the Korean capital market have prevented the provision of long-term credit for housing seekers; remedying these problems is essential to promoting greater housing ownership among the low- and middle-income classes. Government subsidies should also be focused on the poorest of the poor, who cannot afford even to think of owning housing.

  • PDF

An Efficient Heuristic for Storage Location Assignment and Reallocation for Products of Different Brands at Internet Shopping Malls for Clothing (의류 인터넷 쇼핑몰에서 브랜드를 고려한 상품 입고 및 재배치 방법 연구)

  • Song, Yong-Uk;Ahn, Byung-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.129-141
    • /
    • 2010
  • An Internet shopping mall for clothing operates a warehouse for packing and shipping products to fulfill its orders. All the products in the warehouse are put into boxes of the same brand, and the boxes are stored in a row on shelves equipped in the warehouse. To make picking and management easy, boxes of the same brand are located side by side on the shelves. When new products arrive at the warehouse for storage, the products of a brand are put into boxes and those boxes are located adjacent to the existing boxes of the same brand. If there is not enough space for the incoming boxes, however, some boxes of other brands must be moved away, and the incoming boxes are then placed adjacently in the resulting vacant spaces. We want to minimize the movement of the existing boxes of other brands to other places on the shelves during the warehousing of incoming boxes, while keeping all the boxes of the same brand side by side on the shelves. First, we define the adjacency of boxes by viewing the shelves as a one-dimensional series of spaces (cells) for storing boxes, tagging the series of cells with numbers starting from one, and considering any two boxes stored in the cells to be adjacent if their cell numbers are consecutive. We then formulate the problem as an integer programming model to obtain an optimal solution. An integer programming formulation with a Branch-and-Bound technique may not be tractable for this problem, because it would take too long to solve given the number of cells or boxes in the warehouse and the computing power available to the Internet shopping mall. As an alternative, we designed a fast heuristic method for this reallocation problem by focusing on just the unused spaces (empty cells) on the shelves, which results in an assignment problem model. In this approach, the incoming boxes are assigned to empty cells, and then the boxes are reorganized so that the boxes of each brand are adjacent to each other. The objective of this new approach is to minimize the movement of boxes during the reorganization process while keeping the boxes of a brand adjacent to each other. The approach, however, does not ensure optimality in terms of the original problem, that is, minimizing the movement of existing boxes while keeping boxes of the same brand adjacent. Even though this heuristic method may produce a suboptimal solution, we could obtain a satisfactory solution within a satisfactory time, acceptable to real-world experts. To assess the quality of the heuristic solutions, we randomly generated 100 problems, in which the number of cells spans from 2,000 to 4,000, solved them both with our heuristic approach and with the original integer programming approach using a commercial optimization software package, and then compared the heuristic solutions with their corresponding optimal solutions in terms of solution time and the number of box movements. We also implemented our heuristic approach in a storage location assignment system for the Internet shopping mall.
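The empty-cell formulation above is a classical assignment problem, so it can be prototyped directly with the Hungarian algorithm before the reorganization step. A hedged sketch; the movement cost used here (distance from each brand's existing block) is a simple illustrative choice, not the authors' exact cost function:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_boxes_to_cells(new_boxes, empty_cells, brand_anchor):
    """Assign each incoming box to an empty cell, minimizing total
    distance from each brand's existing block of boxes.

    new_boxes: list of brand ids, one per incoming box.
    empty_cells: cell indices currently unused on the shelf.
    brand_anchor: brand id -> cell index of that brand's existing block.
    """
    cost = np.array([[abs(cell - brand_anchor[b]) for cell in empty_cells]
                     for b in new_boxes])
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return {i: empty_cells[j] for i, j in zip(rows, cols)}

# illustrative: three incoming boxes of two brands, four empty cells
print(assign_boxes_to_cells(['A', 'A', 'B'], [5, 9, 10, 42],
                            {'A': 8, 'B': 40}))
```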

The Effect of the Context Awareness Value on the Smartphone Adopters' Advertising Attitude (스마트폰광고 이용자의 광고태도에 영향을 미치는 상황인지가치에 관한 연구)

  • Yang, Chang-Gyu;Lee, Eui-Bang;Huang, Yunchu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.73-91
    • /
    • 2013
  • Advertising market has been facing new challenges due to dramatic changes in advertising channels and the advent of innovative media such as mobile devices. Recent research related to mobile devices is mainly focused on the fact that mobile devices can identify users' physical location in real time, and this sheds light on how location-based technology is utilized to achieve competitive advantage in the advertising market. With the introduction of the smartphone, device functionality has become much more diverse, and context awareness is one of the areas that require further study. This work analyses the influence of the context awareness value resulting from the transformation of advertising channels in the mobile communication market, and our results reflect recent trends in the advertising market environment that are not considered in previous studies. Many constructs have been studied intensively in the context of advertising channels in traditional marketing environments, and entertainment, irritation, and information are considered the most widely accepted variables associated with advertising value. Also, in smartphone advertising, four main dimensions of context awareness value are recognized: identification, activity, timing, and location. In this study, we assume that these four constructs have a positive relationship with context awareness value. Finally, we propose that advertising value and context awareness value positively influence smartphone advertising attitude. A Partial Least Squares (PLS) structural model is used to test the proposed hypotheses in our theoretical research model. A survey was conducted among college students in Korea, and the reliability, convergent validity, and discriminant validity of the constructs and measurement indicators were carefully evaluated; the results show that reliability and validity are confirmed according to predefined statistical criteria. Goodness-of-fit of our research model is also supported. In summary, the results collectively suggest good measurement properties for the proposed research model. The research outcomes are as follows. First, information has a positive impact on advertising value, while entertainment and irritation have no significant impact. Information, entertainment, and irritation together explain 38.8% of the variance in advertising value. Second, along with the change in the advertising market due to the advent of the smartphone, activity, timing, and location have a positive impact on context awareness value, while identification has no significant impact. Identification, activity, location, and timing together explain 46.3% of the variance in context awareness value. Third, advertising value and context awareness value both positively influence smartphone advertising attitude, and these two constructs explain 31.7% of the variability in smartphone advertising attitude. The theoretical implications of our research are as follows. First, the influence of entertainment and irritation, which previous studies found to be crucial factors for advertising value, is reduced, while the influence of information is increased. This indicates that smartphone users are not particularly interested in the entertainment effect of smartphone advertisements and are insensitive to the inconvenience they cause.
Second, in today's ubiquitous computing environment, it is effective to provide differentiated advertising services by utilizing smartphone users' context awareness values such as identification, activity, timing, and location in order to achieve competitive business advantage in the advertising market. As practical implications, enterprises should provide valuable and useful information that might attract smartphone users by adopting a differentiation strategy, as smartphone users are sensitive to the information provided via smartphones. Enterprises should not only provide useful information but also recognize and utilize smartphone users' unique characteristics and behaviors by increasing context awareness values. In summary, our results imply that smartphone advertising should be optimized around the information needs of smartphone users in order to maximize advertising effectiveness.
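As a rough illustration of the hypothesized path structure only (ordinary least squares on standardized composite scores, a crude stand-in for the PLS estimator the authors actually use), the three structural equations can be sketched as follows; all data below are synthetic and every coefficient is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300  # synthetic respondents

def z(x):  # standardize a column
    return (x - x.mean()) / x.std()

# synthetic composite scores for the exogenous constructs
info, ent, irr = (z(rng.normal(size=n)) for _ in range(3))
ident, act, tim, loc = (z(rng.normal(size=n)) for _ in range(4))

# synthetic structural relations (coefficients are arbitrary)
av  = z(0.6 * info + rng.normal(scale=0.8, size=n))             # advertising value
cav = z(0.4 * act + 0.3 * tim + 0.3 * loc
        + rng.normal(scale=0.7, size=n))                        # context awareness value
att = z(0.4 * av + 0.3 * cav + rng.normal(scale=0.8, size=n))   # advertising attitude

def paths(X, y):
    """OLS path coefficients: a rough proxy for PLS inner-model estimates."""
    beta, *_ = np.linalg.lstsq(np.column_stack(X), y, rcond=None)
    return np.round(beta, 2)

print("AV  <-", paths((info, ent, irr), av))
print("CAV <-", paths((ident, act, tim, loc), cav))
print("ATT <-", paths((av, cav), att))
```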

Automatic Text Extraction from News Video using Morphology and Text Shape (형태학과 문자의 모양을 이용한 뉴스 비디오에서의 자동 문자 추출)

  • Jang, In-Young;Ko, Byoung-Chul;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.8 no.4
    • /
    • pp.479-488
    • /
    • 2002
  • In recent years the amount of digital video in use has risen dramatically, keeping pace with the increasing use of the Internet, and consequently an automated method is needed for indexing digital video databases. Textual information, both superimposed and embedded scene text, appearing in a digital video can be a crucial clue for video indexing. In this paper, a new method is presented to extract both superimposed and embedded scene text from a freeze-frame of news video. The algorithm is summarized in the following three steps. In the first step, a color image is converted into a gray-level image, and contrast stretching is applied to enhance the contrast of the input image; then, a modified local adaptive thresholding is applied to the contrast-stretched image. The second step is divided into three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose+CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose+CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two resulting images to further eliminate false-positive components. In the third, filtering step, the characteristics of each component are used, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels, and the ratio of the minor to the major axis of each bounding box. Acceptable results have been obtained using the proposed method on 300 news images, with a recognition rate of 93.6%. The proposed method also performs well on various kinds of images when the size of the structuring element is adjusted.
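The (OpenClose+CloseOpen)/2 smoothing used in the second step is straightforward to reproduce with standard morphology operators. A minimal OpenCV sketch; the 3×3 kernel size and the synthetic input image are illustrative assumptions, not the paper's settings:

```python
import cv2
import numpy as np

def open_close_average(gray, ksize=3):
    """(OpenClose + CloseOpen)/2 morphological smoothing on a
    gray-level image, as used in the candidate text-region step."""
    k = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    oc = cv2.morphologyEx(cv2.morphologyEx(gray, cv2.MORPH_OPEN, k),
                          cv2.MORPH_CLOSE, k)
    co = cv2.morphologyEx(cv2.morphologyEx(gray, cv2.MORPH_CLOSE, k),
                          cv2.MORPH_OPEN, k)
    # average the two results without uint8 overflow
    return ((oc.astype(np.uint16) + co.astype(np.uint16)) // 2).astype(np.uint8)

img = np.random.default_rng(3).integers(0, 256, (64, 64), dtype=np.uint8)
print(open_close_average(img).shape)
```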

Computing the Dosage and Analysing the Effect of Optimal Rechlorination for Adequate Residual Chlorine in Water Distribution System (배.급수관망의 잔류염소 확보를 위한 적정 재염소 주입량 산정 및 효과분석)

  • Kim, Do-Hwan;Lee, Doo-Jin;Kim, Kyoung-Pil;Bae, Chul-Ho;Joo, Hye-Eun
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.10
    • /
    • pp.916-927
    • /
    • 2010
  • In general water treatment processes, disinfection by chlorine is used to prevent waterborne disease and microbial regrowth in the water distribution system. Because chlorine reacts with organic matter, carcinogens such as disinfection by-products (DBPs) are produced in drinking water; therefore, a suitable chlorine injection is needed to limit DBP formation. Rechlorination in water pipelines or reservoirs has recently increased to secure residual chlorine at the ends of water pipelines. EPANET 2.0, developed by the U.S. Environmental Protection Agency (EPA), was used to compute the optimal chlorine injection at the water treatment plant and to predict the rechlorination dosage in the water distribution system. The bulk decay constant ($k_{bulk}$) was obtained by bottle tests, and the wall decay constant ($k_{wall}$) was derived using a systematic analysis method for water quality modeling in the target region. To predict water quality based on a hydraulic analysis model, the residual chlorine concentration was forecast across the water distribution system. The formation of DBPs such as trihalomethanes (THMs) was verified against chlorine dosage in a lab-scale test. The bulk decay constant decreased rapidly with increasing temperature in the early period; at 25°C, it decreased by more than half after 25 hours. In this study, it was possible to calculate the optimal rechlorination dosage and to select suitable injection sites in the network map.
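EPANET models bulk chlorine decay as a first-order reaction, $C(t) = C_0 e^{-k_{bulk} t}$, which is also how a bottle-test constant like the one above is interpreted. A small sketch of the resulting residual and of the booster dose needed to restore a target concentration; all numeric values are illustrative, not from the study:

```python
import math

def residual_chlorine(c0, k_bulk, hours):
    """First-order bulk decay: C(t) = C0 * exp(-k_bulk * t)."""
    return c0 * math.exp(-k_bulk * hours)

def rechlorination_dose(current, target):
    """Extra chlorine (mg/L) a booster must add to reach the target."""
    return max(0.0, target - current)

# illustrative: 1.0 mg/L at the plant, k_bulk = 0.05 /h, 25 h travel time
c = residual_chlorine(c0=1.0, k_bulk=0.05, hours=25)
print(round(c, 3), round(rechlorination_dose(c, target=0.4), 3))
```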

Least-Square Fitting of Intrinsic and Scattering Q Parameters (최소자승법(最小自乘法)에 의(衣)한 고유(固有) Q와 산란(散亂) Q의 측정(測定))

  • Kang, Ik Bum;McMechan, George A.;Min, Kyung Duck
    • Economic and Environmental Geology
    • /
    • v.27 no.6
    • /
    • pp.557-561
    • /
    • 1994
  • Q estimates are made by direct measurement of energy loss per cycle from primary P and S waves, as a function of frequency. Assuming that intrinsic Q is frequency independent and scattering Q is frequency dependent over the frequencies of interest, the relative contributions of each to the total observed Q may be estimated. Test examples are produced by computing viscoelastic synthetic seismograms using a pseudospectral solution with the inclusion of relaxation mechanisms (for intrinsic Q) and a fractal distribution of scatterers (for scattering Q). The composite theory implies that when the total Q for S-waves is smaller than that for P-waves (the usual situation), intrinsic Q dominates; when it is larger, scattering Q dominates. In the inverse problem, performed by a global least-squares search, intrinsic $Q_p$ and $Q_s$ estimates are reliable and unique when their absolute values are sufficiently low that their effects are measurable in the data. Large $Q_p$ and $Q_s$ have no measurable effect and hence are not resolvable. The standard deviation of velocity $({\sigma})$ and the scatterer size (A) are less unique, as they exhibit a tradeoff predicted by Blair's equation. For P-waves, the intrinsic and scattering contributions are of approximately equal importance; for S-waves, the intrinsic contribution dominates.
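Intrinsic and scattering attenuation combine as $1/Q_{total}(f) = 1/Q_i + 1/Q_s(f)$, so a global least-squares search amounts to fitting that sum to the observed total Q over frequency. A compact grid-search sketch; the linear frequency dependence $Q_s = Q_{s0} f$ is a common simple parameterization chosen here only for illustration, not necessarily the authors' form:

```python
import numpy as np

def fit_q(freqs, q_total_obs, qi_grid, qs0_grid):
    """Global least-squares search for intrinsic Qi and scattering Qs(f).

    Model: 1/Q_total(f) = 1/Qi + 1/(Qs0 * f), with Qi frequency
    independent and Qs frequency dependent.
    """
    best, best_pair = np.inf, None
    inv_obs = 1.0 / np.asarray(q_total_obs)
    for qi in qi_grid:
        for qs0 in qs0_grid:
            resid = inv_obs - (1.0 / qi + 1.0 / (qs0 * freqs))
            misfit = (resid ** 2).sum()
            if misfit < best:
                best, best_pair = misfit, (qi, qs0)
    return best_pair, best

f = np.array([2.0, 4.0, 8.0, 16.0])
q_obs = 1.0 / (1.0 / 80 + 1.0 / (30 * f))  # synthetic "observed" total Q
print(fit_q(f, q_obs, np.arange(40, 160, 1.0), np.arange(10, 60, 1.0)))
```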

  • PDF

A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Computation (가변 시간 뉴톤-랍손 부동소수점 역수 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions: Part A
    • /
    • v.12A no.2 s.92
    • /
    • pp.95-102
    • /
    • 2005
  • The Newton-Raphson iterative algorithm for finding a floating-point reciprocal, which is widely used for floating-point division, calculates the reciprocal by performing a fixed number of multiplications. In this paper, a variable-latency Newton-Raphson reciprocal algorithm is proposed that performs multiplications a variable number of times until the error becomes smaller than a given value. To find the reciprocal of a floating-point number $F$, the algorithm repeats the operation $X_{i+1} = X_i \times (2 - e_r - F \times X_i)$, $i \in \{0, 1, 2, \ldots, n-1\}$, with the initial value $X_0 = \frac{1}{F} \pm e_0$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r = 2^{-p}$. The value of $p$ is 27 for single-precision and 57 for double-precision floating point. Let $X_i = \frac{1}{F} + e_i$; then $X_{i+1} = \frac{1}{F} - e_{i+1}$, where $e_{i+1}$ is less than the smallest number representable in the floating-point format, so $X_{i+1}$ approximates $\frac{1}{F}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables ($X_0 = \frac{1}{F} \pm e_0$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal unit. It can also be used to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that utilize floating-point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
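The variable-latency idea is simply to stop the Newton-Raphson recurrence $X_{i+1} = X_i(2 - F X_i)$ as soon as the residual $1 - F X_i$ falls below the error bound, instead of running a fixed iteration count. A software sketch of that loop under stated assumptions: the seed-table granularity is illustrative, and the hardware truncation term $e_r$ is omitted (standard float arithmetic stands in):

```python
def reciprocal(f, p=27, max_iters=8):
    """Variable-latency Newton-Raphson reciprocal of f in [1, 2).

    Iterates x <- x * (2 - f * x) until |1 - f * x| < 2**-p,
    mirroring the stop-when-accurate-enough scheme described above
    (p = 27 for single precision, 57 for double). The paper's
    truncation term e_r is omitted in this floating-point sketch.
    """
    x = 1.0 / (round(f * 16) / 16)   # crude 16-entry table seed (illustrative)
    for i in range(max_iters):
        err = 1.0 - f * x
        if abs(err) < 2.0 ** -p:     # error already below the bound: stop early
            return x, i
        x = x * (2.0 - f * x)        # one Newton-Raphson refinement
    return x, max_iters

print(reciprocal(1.37))  # ~0.729927 = 1/1.37, reached in a few iterations
```

Because the error roughly squares on each pass, inputs whose seed is already close stop after fewer multiplications, which is exactly what makes the average latency lower than the conventional fixed-count scheme.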