• Title/Summary/Keyword: Processing technique


A Study on History and Archetype Technology of Goli-su in Korea (한국 고리수의 역사와 원형기술의 복원 연구)

  • Kim, Young-ran
    • Korean Journal of Heritage: History & Science
    • /
    • v.46 no.2
    • /
    • pp.4-25
    • /
    • 2013
  • Goli-su is an innovative embroidery technique that combines twining and interlacing skills with metal craft, linking loops to one another with a single strand. The loops floating above the ground fabric resemble raised veins of sculpture and give the impression of openwork. In texture and technical style the result has some similarity to the lacework of Western Europe, but it has features of its own: it chiefly creates a splendid luster on the embroidered cloth with metallic materials such as gold foil or gold wire. The basic Goli-su structure of its earliest period appears in the boy-motif embroidered purse excavated from the first level of the Octagonal Nine-storied Pagoda of Woljeong-sa, dating to the 10th century, the early Goryeo period. From the middle Joseon period, several Goli-su embroidered relics survive, known as the "Battle Flag of Goryeo," taken to Japan in 1592 and now kept in a Japanese temple, where the piece has been converted into altar-table covers. From the 18th-19th centuries, two pairs of embroidered pillows from the Joseon palace survive intact, whose date and provenance are well documented. The frames of the pillows were embroidered with Goli-su veins, with gold foil paper inserted inside, and a triangle motif in silk was embroidered on the pillows. By a comprehensive classification, the stitches of needle-looped embroidery divide into three kinds: 1. Goli-su; 2. Goli-Kamgi-su; 3. Goli-Saegim-su. From its establishment in the 10th century to the 13th century, Goli-su developed variant stitches and gradually employed two- to three-dimensional color schemes. As this study shows, the same stitch can still be seen in the embroidered pillows, proving that Goli-su was still practiced in Korea in the 19th century.
Based on the findings of this study, the archetype technology of Goli-su was restored. Han Sang-soo, holder of Important Intangible Cultural Heritage No. 80 and Master of Embroidery, has already recreated Korean Goli-su relics of the Joseon Dynasty. Needle-looped embroidery is the combined technological achievement of our ancestors' outstanding metal craft, twining and interlacing craft, and embroidery art. On the basis of these results, we should inherit it, create from it, and seek new directions in today's multi-dimensional, international industrial society. We can carry on the long history of embroidery, weaving, and fiber processing, extend its applications to other craft industries, and develop new added value in dress materials, fashion technology, ornamental craft, and artistic design. In this way the crafts can support one another and broaden their expressive range, pursuing a more diverse formative beauty that enriches our lives.

Branching Path Query Processing for XML Documents using the Prefix Match Join (프리픽스 매취 조인을 이용한 XML 문서에 대한 분기 경로 질의 처리)

  • Park Young-Ho;Han Wook-Shin;Whang Kyu-Young
    • Journal of KIISE:Databases
    • /
    • v.32 no.4
    • /
    • pp.452-472
    • /
    • 2005
  • We propose XIR-Branching, a novel method for processing partial match queries on heterogeneous XML documents using information retrieval (IR) techniques and novel instance join techniques. A partial match query is defined as one having the descendant-or-self axis '//' in its path expression. In its general form, a partial match query has branch predicates forming branching paths. The objective of XIR-Branching is to efficiently support this type of query on large-scale documents with heterogeneous schemas. XIR-Branching builds on the conventional schema-level methods using relational tables (e.g., XRel, XParent, XIR-Linear[21]) and significantly improves their efficiency and scalability with two techniques: an inverted index technique and a novel prefix match join. The former supports linear path expressions, as in XIR-Linear[21]. The latter supports branching path expressions and finds result nodes more efficiently than the containment joins used in the conventional methods. XIR-Linear is efficient for linear path expressions but cannot handle branching path expressions, which are needed for more detailed and general queries; this paper presents a novel method for handling them. XIR-Branching first reduces the candidate set for a query as a schema-level method, and then efficiently finds the final result set using the novel prefix match join as an instance-level method. We compare the efficiency and scalability of XIR-Branching with those of XRel and XParent using XML documents crawled from the Internet. The results show that XIR-Branching is more efficient than both XRel and XParent by several orders of magnitude for linear path expressions, and by several factors for branching path expressions.
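
The core contrast between the two join styles can be sketched as follows. This is only an illustration of the idea, not the paper's implementation: XIR-Branching's actual join works over relational tables of path expressions, whereas this sketch uses hypothetical interval labels and Dewey-style path labels to show how an ancestor/descendant test becomes a label-prefix comparison.

```python
# Illustrative sketch (assumed labeling schemes, not the paper's tables):
# a containment join tests (start, end) region intervals, while a prefix
# match join tests whether one Dewey-style label is a prefix of another.

def containment_join(ancestors, descendants):
    # ancestors/descendants: lists of (start, end) interval labels
    out = []
    for a_start, a_end in ancestors:
        for d_start, d_end in descendants:
            if a_start < d_start and d_end < a_end:  # d nested inside a
                out.append(((a_start, a_end), (d_start, d_end)))
    return out

def prefix_match_join(ancestors, descendants):
    # ancestors/descendants: lists of Dewey labels such as "1.2.3"
    out = []
    for a in ancestors:
        for d in descendants:
            if d.startswith(a + "."):  # d lies under a in the tree
                out.append((a, d))
    return out
```

The prefix test needs only the candidate's own label, which is what lets result nodes be found without enumerating interval overlaps.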

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, recent databases have shifted from static database structures to dynamic stream database structures. Data mining techniques have traditionally served as decision-making tools, for example in establishing marketing strategies and in DNA analysis. However, areas of recent interest such as sensor networks, robotics, and artificial intelligence require the ability to analyze real-time data more quickly. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining over parts of the database or over individual transactions instead of over all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts a mining operation whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, its mining results always reflect the latest real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As criteria for our performance analysis, we first consider each algorithm's total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms mine databases whose number of distinct items gradually increases.
In the evaluation of mining time and transaction processing, hMiner is faster than Lossy counting. Since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must traverse multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy counting in maximum memory usage: hMiner must keep the full information of each candidate frequent pattern in its hash buckets, while Lossy counting stores the patterns in reduced form in its lattice. Because Lossy counting's storage can share items that occur in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner is more efficient than Lossy counting in the scalability evaluation, for the following reasons: as the number of distinct items increases, fewer items are shared, so Lossy counting's memory efficiency weakens; and as the number of transactions grows, its pruning effect deteriorates. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Hence, their data structures need to be made more efficient before they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
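
The pruning behavior attributed to Lossy counting above can be sketched for the single-item case. This is the classic textbook algorithm, not the paper's pattern-level variant: each counter carries a maximum-error bound, and counters are pruned at bucket boundaries of width ⌈1/ε⌉, where ε is the user-chosen error tolerance.

```python
# Classic Lossy counting sketch for frequent items (illustrative only;
# the abstract discusses the pattern-mining variant of this algorithm).
from math import ceil

def lossy_counting(stream, epsilon):
    width = ceil(1 / epsilon)          # bucket width = ceil(1/epsilon)
    counts = {}                        # item -> (count, max_error)
    for n, item in enumerate(stream, start=1):
        bucket = ceil(n / width)       # current bucket id
        if item in counts:
            c, err = counts[item]
            counts[item] = (c + 1, err)
        else:
            # a newly seen item may have been pruned before, so its
            # true count could exceed the stored count by bucket - 1
            counts[item] = (1, bucket - 1)
        if n % width == 0:             # prune at each bucket boundary
            counts = {i: (c, e) for i, (c, e) in counts.items()
                      if c + e > bucket}
    return counts
```

Pruning bounds memory at the cost of undercounting by at most ε·n, which is exactly the memory/accuracy trade-off the comparison with hMiner turns on.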

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.3
    • /
    • pp.153-163
    • /
    • 2017
  • The objective of this study is to speed up the estimation of the weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model)-based precipitation) used in the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Even so, the server cannot be dedicated to a single county, and very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. To reduce this overhead, several caching and parallelization techniques were applied, and their performance and applicability were measured. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, input/output (I/O) takes significantly longer than computation, so a technique that reduces disk I/O bottlenecks is needed; (2) when I/O operations are numerous, it is advantageous to distribute them over several servers, but each server must keep its own cache of input data so that the servers do not compete for the same resource; and (3) GPU-based parallel processing is most suitable for computation-heavy models such as PRISM.

A preliminary study for development of an automatic incident detection system on CCTV in tunnels based on a machine learning algorithm (기계학습(machine learning) 기반 터널 영상유고 자동 감지 시스템 개발을 위한 사전검토 연구)

  • Shin, Hyu-Soung;Kim, Dong-Gyou;Yim, Min-Jin;Lee, Kyu-Beom;Oh, Young-Sup
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.19 no.1
    • /
    • pp.95-107
    • /
    • 2017
  • In this study, preliminary work was undertaken toward a tunnel incident automatic detection system based on a machine learning algorithm, intended to detect incidents occurring in a tunnel in real time and to identify the type of each incident. Two road sites with operating CCTVs were selected, and part of the CCTV footage was processed to produce training data sets. The data sets consist of the position and time information of moving objects on the CCTV screen, extracted by detecting and then tracking objects entering the screen with a conventional image processing technique available in this study. The data sets are matched with 6 categories of events, such as lane change and stopping, which are also included in the training data. The training data were learned by a resilient-propagation neural network with two hidden layers; 9 architectural models were set up for parametric study, from which the model with 300 units in the first hidden layer and 150 in the second was found to be optimal, giving the highest accuracy on both the training data and test data not used for training. This study showed that highly variable and complex traffic and incident features can be identified well, without any hand-defined feature rules, by using machine learning. In addition, the detection capability and accuracy of the machine learning-based system will improve automatically as the collection of tunnel CCTV images grows into big data.
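
The best-performing architecture described above can be sketched as a forward pass in numpy. The input size (4 features of object position/time), the tanh activations, and the random weights are assumptions for illustration; the paper's network is trained with resilient propagation, which is not reproduced here.

```python
# Minimal sketch of the 300-150 two-hidden-layer architecture with
# 6 output event classes (lane change, stopping, etc.).
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 300, 150, 6]        # input -> 300 -> 150 -> 6 event classes
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)             # hidden-layer activations
    logits = h @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())      # softmax over the 6 classes
    return e / e.sum()

probs = forward(np.array([0.1, 0.5, -0.2, 0.3]))  # class probabilities
```

The parametric study in the abstract amounts to varying the two hidden sizes in `sizes` and comparing held-out accuracy.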

Improvement of 2-pass DInSAR-based DEM Generation Method from TanDEM-X bistatic SAR Images (TanDEM-X bistatic SAR 영상의 2-pass 위성영상레이더 차분간섭기법 기반 수치표고모델 생성 방법 개선)

  • Chae, Sung-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.847-860
    • /
    • 2020
  • The 2-pass DInSAR (Differential Interferometric SAR) processing steps for DEM generation consist of co-registration of the SAR image pair, interferogram generation, phase unwrapping, calculation of DEM errors, and geocoding. The procedure is complicated, and the accuracy of each step affects the quality of the finally generated DEM. In this study, an improved method was developed to enhance the performance of DEM generation based on the 2-pass DInSAR technique applied to TanDEM-X bistatic SAR images. The developed method can significantly reduce both the error in the unwrapped phase image and the error that may occur during the geocoding step. The performance of the developed algorithm was analyzed by comparing the vertical accuracy (root mean square error, RMSE) of the existing and proposed methods against ground control points (GCPs) obtained from GPS survey. The vertical accuracy of the DInSAR-based DEM generated without correcting the unwrapped phase error and the geocoding error is 39.617 m, whereas the vertical accuracy of the DEM generated with the proposed method is 2.346 m, confirming that the proposed correction improves DEM accuracy. Through the proposed 2-pass DInSAR-based DEM generation method, the SRTM DEM error observed by DInSAR was compensated relative to the SRTM 30 m DEM (vertical accuracy 5.567 m) used as reference. This finally produced a DEM with about 5 times better spatial resolution and about 2.4 times better vertical accuracy. In addition, the DEM generated with the proposed method was matched in spatial resolution to the SRTM 30 m DEM and the TanDEM-X 90 m DEM, and the vertical accuracies were compared.
As a result, the vertical accuracy was confirmed to improve by about 1.7 and 1.6 times, respectively, showing that more accurate DEM generation is possible with the proposed method. If the method derived in this study is used to continuously update DEMs for regions with frequent morphological change, DEMs can be updated effectively, quickly, and at low cost.
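
The vertical accuracy figures quoted above are RMSE values between DEM heights and GCP heights from GPS survey. A minimal sketch of that computation, with made-up illustrative heights rather than the paper's data:

```python
# Vertical accuracy as RMSE between DEM heights and GCP reference
# heights (illustrative; the heights below are invented numbers).
import numpy as np

def vertical_rmse(dem_heights, gcp_heights):
    diff = np.asarray(dem_heights, dtype=float) - np.asarray(gcp_heights, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```

Comparing this value for DEMs produced with and without the phase/geocoding corrections is how the 39.617 m vs. 2.346 m contrast was obtained.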

A Study on Effective Management & Administration System for Deluxe Hotel Kitchen in Seoul Area. (관공호텔 조리직무의 분업과 통합에 따른 문제점과 개선방안에 관한 연구)

  • 라영선
    • Culinary science and hospitality research
    • /
    • v.1
    • /
    • pp.57-89
    • /
    • 1995
  • Despite the prolonged stagnation of both the international and domestic economies, the hotel business, like the tourist industry as a whole, has kept growing, owing to increases in surplus income and in the worldwide flow of travelers. Over the last 4 years, the yearly mean growth rate of domestic hotels reached 9.9%, and that of superior-class hotels 15.2%. In the revenue composition of domestic tourist hotels, guest rooms account for 37.4% and food & beverage (F&B) for 39.9%. This shows that our hotel business concentrates its interest on F&B, whose productivity per unit area can be increased almost without limit, and that superior-class hotels strong in F&B are increasing, in contrast to European and American hotels, whose management focuses on guest rooms. Because the value-added rate of F&B is low relative to the growth of its earnings, hotels are interested in management techniques that raise this rate. Within F&B costs, personnel expenditure accounts for 36.5% and direct materials for 31.5%; how the personnel and material costs that make up as much as 68% of total revenue are managed therefore greatly affects net profit. Effective F&B cost management can thus be called one of the most important know-hows in hotel management. In particular, management know-how for the kitchen department, where most food is produced, strongly affects various expenses and productivity, and hence overall management results. Yet most hotel top managers do not take seriously the fact that the kitchen system greatly affects total expenditure. This study starts by recognizing the fundamental causes behind the two largest cost elements incurred in F&B and attempts to present an effective kitchen system.
To address these questions, I compared and analyzed the productivity and F&B cost of unit kitchens in superior-class hotels in Seoul that differ in kitchen system. To pursue the purpose of this study effectively, I compared, across kitchen systems, the room-service and coffee-shop menus, the flow of basic food through the kitchen, the extent and results of division of labor and integration in the kitchen, the scale of outlet kitchens, productivity, the turnover rate of food in store, and the food cost rate. These elements were compared and analyzed in two main groups: (1) the main production kitchen and banquet kitchen, and (2) the coffee-shop kitchen and room-service kitchen. This study thus points out the problems in managing kitchens of superior-class hotels with different systems, and an effort was made to find a better kitchen system for superior deluxe hotels. Regarding the proper scale of division of labor and integration of unit kitchens and a placement plan for outlet restaurant kitchens, I emphasize the following. First, the kitchen system, as a sub-system of the hotel management system, is composed of the sub-systems of the outlet unit kitchens; basic food materials are cooked and served to guests while the support kitchen and outlet restaurant kitchens interact organically, so the kitchen should be considered a system of integrated sub-systems. Second, the support and banquet kitchens should be integrated under one management, and these unit kitchens should be designed to be placed behind the banquet room area. Third, the coffee-shop kitchen and room-service kitchen should be integrated under one management. Fourth, several unit business kitchens should be placed on the same floor. Fifth, the main production kitchen ought to be located near the loading dock, food store, and large refrigerator.
Sixth, considering the limits of supervision, duties should be arranged with 12-20 cooks in two shifts a day for a sub-kitchen, or 18-30 cooks in three shifts a day, so that labor can be divided. Lastly, I note two directions and tasks for future study. First, by comparing the effective income against the increased costs incurred by raising the use rate of second-stage processed food materials purchased from outside, better aspects of the processing, production, and distribution system could be identified, and their effect on the hotel kitchen system studied. Second, a more efficient kitchen system could be established by comparing and analyzing the amount of indirect costs and the flow of food in different kitchen systems.


Enhancement of Immune Activity of Spirulina maxima by Low Temperature Ultrasonification Extraction (저온 초음파 추출에 의한 Spirulina maxima 면역활성 증진)

  • Oh, Sung-Ho;Han, Jae-Gun;Ha, Ji-Hye;Kim, Young;Jeong, Myoung-Hoon;Kim, Seong-Sub;Jeong, Hyang-Suk;Choi, Geun-Pyo;Park, Uk-Yeon;Kang, Do-Hyung;Lee, Hyeon-Yong
    • Korean Journal of Food Science and Technology
    • /
    • v.41 no.3
    • /
    • pp.313-319
    • /
    • 2009
  • The marine microalga Spirulina maxima was extracted with water or ethanol at 100 or 80°C and by ultrasonification in water at 60°C. The ultrasonification technique gave the highest yield (19.8%). To be therapeutically useful, an extraction should yield a product with low cytotoxicity and high immune activity against skin infections. The cytotoxicity of all extracts (1.0 mg/mL) was below 25%, and that of the extract prepared by ultrasonification was 5%. Extracts prepared in the described manners inhibited hyaluronidase activity by up to 40% compared with the control. Increased growth of human B, T, and NK cells and increased cytokine secretion were observed, confirming the interrelationship between human immune activity and skin immune activity. The extract prepared by ultrasonification increased the growth of human B, T, and NK cells to 10.3×10⁴ cells/mL, 11.3×10⁴ cells/mL, and 19.1×10⁴ cells/mL, respectively, and also greatly increased the secretion of both IL-6 and TNF-α. Moreover, protein, Na, and leucine were estimated to occupy a high proportion of the extract. Accordingly, this study confirms that extracts prepared as described have the potential to effectively increase skin immunity.

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which involve two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules; since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM).
On-chip fuzzification by a table-lookup method. On-chip defuzzification by a centroid method. A reconfigurable architecture for processing two rule formats. RAM/datapath redundancy for higher yield. The chip can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E using the same datapath; with this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5× increase in inference speed if the R3000 had min and max instructions, which would also be useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required for a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches, even with a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes: the ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME WITH 51 RULES
                      MIPS R3000 (regular)   MIPS R3000 (min/max)   ASIC
  6,000 inferences    125 s                  49 s                   0.0038 s
  1 inference         20.8 ms                8.2 ms                 6.4 µs
  FLIPS               48                     122                    156,250
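
The max-min compositional inference the chips implement can be sketched in software. This is a generic Mamdani-style sketch under assumed membership functions, not the chips' datapath: rule strength is the min over antecedent memberships, consequents are clipped by min, outputs are combined by max, and the UNC/MCNC chip's on-chip defuzzification corresponds to the centroid step.

```python
# Generic Mamdani max-min inference sketch (illustrative, not the ASIC).
import numpy as np

def mamdani_infer(rules, inputs, universe):
    # rules: list of (antecedent membership fns, consequent membership fn);
    # inputs: crisp input values, one per antecedent.
    out = np.zeros_like(universe, dtype=float)
    for antecedents, consequent in rules:
        strength = min(mu(x) for mu, x in zip(antecedents, inputs))  # min (AND)
        clipped = np.minimum(strength, consequent(universe))  # min implication
        out = np.maximum(out, clipped)                        # max composition
    return out

def centroid(universe, mu):
    # Centroid defuzzification, as performed on-chip in the UNC/MCNC design.
    return float(np.sum(universe * mu) / np.sum(mu))
```

The min/max calls in the inner loop are exactly the operations the proposed R3000 min and max instructions would accelerate.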


A 13b 100MS/s 0.70㎟ 45nm CMOS ADC for IF-Domain Signal Processing Systems (IF 대역 신호처리 시스템 응용을 위한 13비트 100MS/s 0.70㎟ 45nm CMOS ADC)

  • Park, Jun-Sang;An, Tai-Ji;Ahn, Gil-Cho;Lee, Mun-Kyo;Go, Min-Ho;Lee, Seung-Hoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.46-55
    • /
    • 2016
  • This work proposes a 13b 100MS/s 45nm CMOS ADC with high dynamic performance for IF-domain high-speed signal processing systems, based on a four-step pipeline architecture to optimize the operating specifications. The SHA employs a wideband high-speed sampling network to properly process high-frequency input signals exceeding the sampling frequency. The SHA and MDACs adopt a two-stage amplifier with a gain-boosting technique to obtain the required high DC gain and wide signal-swing range, while the amplifier and bias circuits use the same unit-size devices repeatedly to minimize device mismatch. Furthermore, a separate analog power supply for the on-chip current and voltage references minimizes performance degradation caused by undesired noise and interference from adjacent functional blocks during high-speed operation. The proposed ADC occupies an active die area of 0.70 mm², using various process-insensitive layout techniques to minimize the effects of physical process imperfections. The prototype ADC in 45nm CMOS demonstrates a measured DNL and INL within 0.77 LSB and 1.57 LSB, with a maximum SNDR and SFDR of 64.2 dB and 78.4 dB at 100MS/s, respectively. The ADC is implemented with long-channel devices rather than the minimum channel-length devices available in this CMOS technology, in order to process the wide input range of 2.0 V_PP required by the system and to obtain high dynamic performance in IF-domain input signal bands. The ADC consumes 425.0 mW with a single analog supply voltage of 2.5 V and two digital supply voltages of 2.5 V and 1.1 V.