• Title/Summary/Keyword: amount of computation

Search results: 604 items (processing time: 0.023 sec)

산소부화용 공업로의 운전조건이 열효율에 미치는 영향 (A Numerical Study on the Efficiency of an Industrial Furnace for Oxygen Combustion Conditions)

  • 김강민;이연경;안석기;김규보;유인;전충환
    • 에너지공학
    • /
Vol.24 No.3
    • /
    • pp.82-88
    • /
    • 2015
  • Because the basic dimensions and thermal capacity of an industrial heating furnace are difficult to modify once it has been installed, an integrated design program is needed to calculate accurate basic specifications before design, fabrication, and installation. Prior to developing such a design program, this study built an arithmetic module that calculates the heat loss of each furnace section from the basic furnace specifications, and applied it to an oxygen-fired industrial furnace in order to determine a satisfactory efficiency for a continuous reheating furnace. Using this module, the efficiency variation of an industrial furnace with a production rate of 110 ton/hour was examined under different design conditions: fuel type, combustion method, and the presence or absence of exhaust-gas recirculation. Under pure-oxygen combustion, the efficiency was more than 15% higher than under air combustion. When COG (Coke Oven Gas) was used, the efficiency was higher than with natural gas as fuel. In air combustion, using preheated air increased the efficiency by about 33% compared with cold air, whereas under pure-oxygen conditions the reduced exhaust-gas volume raised the efficiency by 7%.
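The heat-balance bookkeeping such an arithmetic module performs can be illustrated compactly; a minimal Python sketch, with all loss terms and figures invented for illustration (the abstract does not give the paper's actual model):

```python
# Minimal sketch of an arithmetic heat-balance module of the kind described:
# furnace efficiency from fuel heat input minus per-section losses.
# All names and numbers are illustrative, not the paper's actual model.

def furnace_efficiency(fuel_input_kw, losses_kw):
    """losses_kw: dict of per-section losses, e.g. flue gas, walls, openings."""
    useful = fuel_input_kw - sum(losses_kw.values())
    return useful / fuel_input_kw

eff = furnace_efficiency(
    fuel_input_kw=50_000,
    losses_kw={"flue_gas": 15_000, "wall": 3_000, "openings": 1_500, "cooling": 2_000},
)
print(f"efficiency = {eff:.1%}")  # -> 57.0%
```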

대용량 추론을 위한 분산환경에서의 가정기반진리관리시스템 (Distributed Assumption-Based Truth Maintenance System for Scalable Reasoning)

  • 바트셀렘;박영택
    • 정보과학회 논문지
    • /
Vol.43 No.10
    • /
    • pp.1115-1123
    • /
    • 2016
  • An assumption-based truth maintenance system (ATMS) is a tool that records the inference process of a reasoning system and supports non-monotonic reasoning. Because it supports dependency-directed backtracking, it is also a powerful tool for solving very large search-space problems. By recording every inference step, it makes it possible to check the beliefs of an intelligent system in a particular context very quickly and to provide solutions to non-monotonic reasoning problems efficiently. Recently, however, as the volume of data has grown enormously, it has become impossible to store the large-scale inference process of a problem-solving program on a single machine. Recording the problem-solving process over large-scale data incurs heavy computation and memory overhead. To overcome these drawbacks, this paper proposes a method for maintaining incremental context reasoning based on a functional and object-oriented approach in an Apache Spark environment. It stores assumptions and derivations in a distributed environment and allows changes to materialized large-scale datasets to be applied efficiently. It also presents a way to manage a large-scale inference process effectively by processing the ATMS labels and environments in a distributed manner. To measure the performance of the proposed system, OWL/RDFS reasoning over the LUBM dataset was performed on a cluster of five nodes, and experiments on adding, explaining, and removing data were conducted. Reasoning over LUBM2000 produced 80 GB of inferred data, and when applied to the ATMS, additions, explanations, and removals were each handled within a few seconds.
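The label/environment bookkeeping that the paper distributes over Spark can be shown in miniature; a single-machine Python illustration, with the data layout assumed rather than taken from the paper:

```python
# Miniature of the core ATMS bookkeeping the paper distributes: each node's
# label is a set of environments, each environment a minimal frozenset of
# assumptions under which the node holds. The Spark version partitions
# these labels and environments across the cluster.

def propagate(justifications, labels, node):
    """justifications: {node: [tuple_of_antecedents, ...]};
    labels: {node: set of frozenset assumption-sets}."""
    new_label = set()
    for antecedents in justifications.get(node, []):
        envs = [frozenset()]
        for a in antecedents:  # cross product of antecedent environments
            envs = [e | env for e in envs for env in labels.get(a, set())]
        new_label.update(envs)
    # subsumption pruning: keep only minimal environments
    labels[node] = {e for e in new_label if not any(o < e for o in new_label)}

# Assumption nodes label themselves, e.g. labels = {"A1": {frozenset({"A1"})}}
```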

단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산 (Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure Curve)

  • 최귀열
    • 한국농공학회지
    • /
Vol.7 No.1
    • /
    • pp.861-876
    • /
    • 1965
During my stay in the Netherlands, I studied the following, primarily in relation to the Mokpo Yong-san project which had been studied by NEDECO for a feasibility report.

1. Unit hydrograph at Naju. There are many ways to make a unit hydrograph, but I explain here how to make a unit hydrograph from the actual run-off curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph every two hours, we get the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, recording a rainfall intensity averaging 9.4 mm per hour for 12 hours. If several rain gage stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gage record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff flow. I also tried to keep the difference between the calculated discharge amount and the measured discharge under 10%. The discharge period of a unit graph depends on the length of the catchment area.

2. Determination of sluice dimension. According to the principles of design presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir to avoid crop and structure damage. The total flow into the reservoir is the summation of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. The mean tide is adequate for determining the sluice dimension, because the spring tide is the worst case and the neap tide the best condition for the result of the calculation.

3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is that point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner: using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h = \frac{V^2}{2g}$, and this must be equal to the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir." The flow over the weir is then dependent upon the higher water level and not on the difference between high and low water levels. When the weir is "submerged," that is, when the higher water level is less than 2/3 of the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, due to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec. As the maximum velocities are higher than this limit, we must use other construction methods in closing the gap. This can be done by dump-cars from each side or by using a cableway.
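The head-velocity relation quoted above fixes the whole iteration; written out, with one worked number for scale:

$$ h = \frac{V^2}{2g} \qquad\Longleftrightarrow\qquad V = \sqrt{2gh} $$

For a head difference of, say, $h = 0.46$ m, this gives $V = \sqrt{2 \times 9.81 \times 0.46} \approx 3.0$ m/sec, exactly the barge-closure limit cited at the end of the abstract.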


Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 1993년도 Fifth International Fuzzy Systems Association World Congress 93
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We approached it from two distinct directions. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested. Both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM memory. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D Then Do E, and Then Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B Then Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a small set of programs, so tailoring a microprocessor to create an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds of 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

    TABLE I. INFERENCE TIME BY 51 RULES

                      MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences   125 s                  49 s                        0.0038 s
    1 inference       20.8 ms                8.2 ms                      6.4 ㎲
    FLIPS             48                     122                         156,250
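The min/max operations the proposed instructions accelerate are the inner loop of the inference itself; a short Python sketch of max-min composition with Mamdani clipping and centroid defuzzification (a software illustration, not the chip's datapath; the names are invented):

```python
import numpy as np

# Software illustration of the operations the hardware accelerates:
# max-min compositional inference with Mamdani clipping and centroid
# defuzzification. Each fuzzy set is a 64-element array, matching the
# discretization quoted in the abstract.

def mamdani_infer(rules, inputs):
    """rules: list of (antecedent_sets, consequent_set);
    inputs: indices of the crisp input values in each universe."""
    out = np.zeros(64)
    for antecedents, consequent in rules:
        # firing strength: min (fuzzy AND) across antecedent memberships
        w = min(a[i] for a, i in zip(antecedents, inputs))
        # clip the consequent at w, aggregate rules with max (fuzzy OR)
        out = np.maximum(out, np.minimum(w, consequent))
    return out

def defuzzify_centroid(mu, universe):
    # centroid method, as implemented on-chip in the UNC/MCNC design
    return float(np.sum(mu * universe) / np.sum(mu))
```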


Active VM Consolidation for Cloud Data Centers under Energy Saving Approach

  • Saxena, Shailesh;Khan, Mohammad Zubair;Singh, Ravendra;Noorwali, Abdulfattah
    • International Journal of Computer Science & Network Security
    • /
Vol.21 No.11
    • /
    • pp.345-353
    • /
    • 2021
  • Cloud computing represents a new era of computing, formed through the combination of service-oriented architecture (SOA), the Internet, and grid computing with virtualization technology. Virtualization is the concept through which every cloud is able to provide on-demand services to users. Most IT service providers adopt cloud-based services to meet their users' high demand for computation, as the cloud is the most flexible, reliable, and scalable technology. The energy-performance tradeoff has become the main challenge in cloud computing as its acceptance and popularity increase day by day. Cloud data centers require a huge power supply for server virtualization to maintain on-demand high-performance computing. High power demand increases the energy cost of service providers and also harms the environment through CO2 emissions. An optimization of cloud computing based on the energy-performance tradeoff is required to strike a balance between energy saving and the QoS (quality of service) policies of the cloud. A study of the power usage of resources in cloud data centers as a function of their assigned workload found that an idle server consumes about 50% of its peak-utilization power [1]. Therefore, a large number of underutilized servers in a cloud data center worsens the energy-performance tradeoff. To handle this issue, much research has proposed energy-efficient algorithms that minimize energy consumption while maintaining the SLA (service level agreement) at a satisfactory level. VM (virtual machine) consolidation is one such technique that balances energy use against the SLA. In the scope of this paper, we explore reinforcement with fuzzy logic (RFL) for VM consolidation to achieve an energy-based SLA. In the proposed RFL-based active VM consolidation, the primary objective is to manage physical server (PS) nodes so as to avoid over-utilization and under-utilization, and to optimize the placement of VMs. A dynamic threshold (based on RFL) is proposed for detecting over-utilized PSs. For an over-utilized PS, a fuzzy-logic-based VM selection policy is proposed, which selects VMs for migration so as to maintain the SLA. Additionally, it incorporates a VM placement policy that categorizes non-over-utilized servers as balanced, under-utilized, or critical. The CloudSim toolkit is used to simulate the proposed work on real-world workload traces from the CoMon project defined by PlanetLab. Simulation results show that the proposed policies are the most energy-efficient compared with others, in terms of reduction in both electricity usage and SLA violations.
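The detection and selection steps can be illustrated with a fixed threshold standing in for the learned RFL one; a simplified Python sketch, with the policy details assumed rather than taken from the paper:

```python
# Simplified sketch of threshold-based over-utilization detection and VM
# selection for migration. The paper's RFL controller learns the threshold
# dynamically; here it is a fixed illustrative parameter, and utilizations
# are fractions of host capacity.

def is_overutilized(host_util, threshold=0.8):
    return host_util > threshold

def select_vm_for_migration(vms, host_util, threshold=0.8):
    """vms: list of (vm_id, cpu_util). Prefer the smallest VM whose removal
    brings the host back under the threshold, limiting migration cost."""
    excess = host_util - threshold
    fits = [v for v in vms if v[1] >= excess]
    return min(fits, key=lambda v: v[1]) if fits else max(vms, key=lambda v: v[1])
```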

LNG 자동차 연료 탱크의 열적 거동에 대한 예측 (Prediction of Thermal Behavior of Automotive LNG Fuel Tank)

  • 남궁규완;주석재
    • 대한기계학회논문집B
    • /
Vol.34 No.9
    • /
    • pp.875-883
    • /
    • 2010
  • To predict the insulation performance and fuel supply capability of a vehicle-mounted LNG fuel tank, this study analyzed a double-walled tank with vacuum insulation between the inner and outer vessels, a capacity of 450 L, and a normal operating pressure of 800 kPa, assuming the properties of LNG to be identical to those of methane ($CH_4$). To extend the sealed storage period, a shielded pipe was proposed and its storage period was compared with that of the existing fuel tank. In addition, the additional heat transfer rate from the outside required to maintain the tank pressure that guarantees an adequate fuel supply to the engine was predicted. For these calculations, thermodynamic relations among the rate of pressure change, the heat transfer rate, and the fuel flow rate were derived, and calculation equations based on thermal resistance were set up from the selected fuel tank model. According to the results, the fuel tank with the shielded pipe extended the storage period by about 25-30% or more, and suitable operating conditions for the heat transfer supplied to the tank from outside to maintain the minimum fuel-feed pressure could also be determined.
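The abstract does not state the derived relation explicitly; a generic first-law balance for a rigid tank with outflow, the standard starting point for such a derivation (not necessarily the authors' exact equation):

$$ \frac{d(mu)}{dt} = \dot{Q}_{in} - \dot{m}_{out}\,h_{out}, \qquad \frac{dm}{dt} = -\dot{m}_{out} $$

which gives $m\,\frac{du}{dt} = \dot{Q}_{in} - \dot{m}_{out}\,(h_{out} - u)$; the pressure-change rate then follows from the property relation $P = P(u,\rho)$ of the saturated methane mixture in the fixed tank volume.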

전력소비를 이용한 실물경기지수 개발에 관한 연구 (Electricity Consumption as an Indicator of Real Economic Status)

  • 오승환;김태중;곽동철
    • 유통과학연구
    • /
Vol.14 No.3
    • /
    • pp.63-71
    • /
    • 2016
  • Purpose - A variety of indicators are used to diagnose economic conditions. However, most indicators describe the past economic situation because of the time lag between measurement and announcement. This study argues for the resurrection of an idea: electricity demand can be used as an indicator of economic activity. In addition, this study endeavors to develop a new Real Business Index (RBI) that can quickly represent real economic conditions based on the sales statistics of industrial and public electricity. Research design, data, and methodology - In this study, monthly sales of industrial and public electricity from 2000 to 2015 were investigated to analyze the relationship between economic conditions and the amount of electricity consumption, and to develop a new Real Business Index. To formulate the index, this study followed three steps. First, we chose the explanatory variables and period and collected the data. Second, after calculating the monthly changes for each variable, standardization and estimation of the weights were conducted. Third, the computation of the RBI finalized the development of the empirical model. Principal component analysis was used to evaluate the weighted contribution ratio among 3 sectors and 17 data series, as sketched below. Hodrick-Prescott filter analysis was used to verify the robustness of our model. Results - The empirical results are as follows. First, the compatibility of the predictability between the new RBI and the existing monthly cycle of the coincident composite index was extremely high. Second, the two indexes had a high correlation of 0.7156. In addition, Hodrick-Prescott filter analysis demonstrated that the two indexes moved together. Third, when the changes of the two indexes were compared, the timing of the highest and lowest points was found to be similar, which suggests that the new RBI can be used as a complementary indicator, in the sense that it can describe economic conditions almost in real time. Conclusion - A new economic index that can describe economic conditions quickly is worth developing, since accurately determining the current economic situation is useful for establishing economic policy and corporate strategy. The sales of electricity have a close relationship with economic conditions because electricity is a main resource of industrial production. Furthermore, electricity sales figures can be gathered almost in real time. This study applied an econometric model to the statistics of industrial and public electricity sales. In conclusion, the new RBI was highly correlated with the existing monthly economic indexes. In addition, the comparison between the RBI and other indexes demonstrated that the direction of economic change and the timing of the highest and lowest points were almost the same. Therefore, the RBI can serve as a supplementary indicator to the official indicators published by the Bank of Korea or Statistics Korea.
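The three formulation steps map directly onto a few lines of linear algebra; a minimal Python sketch, with the data and weighting scheme illustrative rather than the paper's:

```python
import numpy as np

# Minimal sketch of the index-construction steps described in the abstract:
# monthly changes -> standardization -> principal-component weights -> index.
# The data layout and weighting details are illustrative, not the paper's.

def real_business_index(sales):
    """sales: (months, series) array of electricity sales statistics."""
    changes = np.diff(sales, axis=0) / sales[:-1]      # monthly change rates
    z = (changes - changes.mean(0)) / changes.std(0)   # standardize each series
    cov = np.cov(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)             # ascending eigenvalues
    w = np.abs(eigvecs[:, -1])                         # first principal component
    w /= w.sum()                                       # weighted contribution ratio
    return z @ w                                       # composite index series
```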

최대 고유치 문제의 해를 이용한 적응 안테나 어레이와 CDMA 이동통신에의 응용 (Design of adaptive array antenna for tracking the source of maximum power and its application to CDMA mobile communication)

  • 오정호;윤동운;최승원
    • 한국통신학회논문지
    • /
Vol.22 No.11
    • /
    • pp.2594-2603
    • /
    • 1997
  • This paper proposes a method for adaptively forming a beam pattern. Under the condition that the desired signal is significantly stronger than each interfering signal, a condition that always holds after the chip correlator in normal CDMA mobile communication, the proposed method provides a beam pattern that increases the signal-to-noise ratio (SNR) and signal-to-interference ratio (SIR), so it can increase channel capacity and improve communication quality. The main advantages of the proposed method are as follows: (1) no training signal or training period is required; (2) performance does not degrade, nor does the procedure become more complicated, due to correlation between signals; (3) the number of antennas in the array need not exceed the number of arriving signals; (4) since the whole procedure is iterative, a new beam pattern can be formed from new data even when the angle of arrival changes due to the movement of the signal source; (5) because the total amount of computation is much smaller than that of conventional methods, the beam pattern can be formed in real time at every snapshot. In fact, the amount of computation required to obtain a new weight vector, including updating the $N \times N$ autocorrelation matrix (where N is the number of antennas in the array), is $O(3N^2 + 12N)$. If the autocorrelation matrix is approximated by the instantaneous signal vector at each snapshot, this is reduced to $O(11N)$.
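The snapshot-by-snapshot update behind the $O(3N^2 + 12N)$ figure can be sketched as a rank-one autocorrelation update followed by one power-iteration step toward the maximum-eigenvalue eigenvector; an illustrative Python sketch, not the authors' exact algorithm:

```python
import numpy as np

# Illustrative sketch: update the array autocorrelation matrix with each
# snapshot, then take one power-iteration step so the weight vector tracks
# the principal (maximum-eigenvalue) eigenvector. Names are assumptions.

def update_weights(R, x, w, forget=0.95):
    """R: N x N autocorrelation matrix; x: new snapshot vector (complex);
    w: current weight vector."""
    R = forget * R + np.outer(x, x.conj())  # rank-one autocorrelation update
    w = R @ w                                # one power-iteration step
    w /= np.linalg.norm(w)                   # keep unit norm
    return R, w
```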


사물인터넷 응용을 위한 에지-포그 클라우드 기반 계층적 데이터 전달 방법의 설계 및 평가 (Design and Evaluation of an Edge-Fog Cloud-based Hierarchical Data Delivery Scheme for IoT Applications)

  • 배인한
    • 인터넷정보학회논문지
    • /
Vol.19 No.1
    • /
    • pp.37-47
    • /
    • 2018
  • The number and capabilities of Internet-of-Things (IoT) devices will grow exponentially and improve in the future. Such devices may generate vast amounts of time-constrained data. In the IoT context, data management should act as an intermediate layer between the objects and devices that generate data and the applications that access the data for analysis purposes and services. In addition, most IoT services are content-centric rather than host-centric, to increase data availability and the efficiency of data delivery. The IoT will interconnect all communicating devices and make the data generated by, or associated with, devices and objects globally accessible. Fog computing manages data and computation at the network edge near end users, and provides end users with new types of applications and services with low latency, high bandwidth, and geographical distribution. In this paper, to deliver IoT data to the corresponding IoT applications efficiently and reliably while guaranteeing time sensitivity, we propose the $EFcHD^2$ (Edge-Fog cloud-based Hierarchical Data Delivery) method, which is based on the edge-fog cloud, a fully distributed hybrid model of edge and fog compute clouds, and which uses information-centric networking and Bloom filters. The $EFcHD^2$ method stores a copy of IoT data, or feature data pre-processed by an edge node, at an appropriate location in the edge-fog cloud, considering the characteristics of the IoT data: locality, size, real-time requirements, and popularity. The performance of the proposed $EFcHD^2$ method is evaluated with an analytical model and compared with a fog-server-based method and a CCN (Content-Centric Networking)-based data delivery method.
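The Bloom-filter lookup the scheme relies on is compact enough to sketch; an illustrative Python version, with sizes and hash choices assumed (the paper's parameters are not given):

```python
import hashlib

# Minimal Bloom filter of the kind used for content-name lookup in the
# proposed EFcHD2 scheme. Sizes (m) and hash count (k) are illustrative.

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _hashes(self, name):
        for i in range(self.k):  # k independent hash positions per name
            h = hashlib.sha256(f"{i}:{name}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, name):  # register a stored content name
        for pos in self._hashes(name):
            self.bits[pos] = 1

    def might_contain(self, name):  # no false negatives, rare false positives
        return all(self.bits[pos] for pos in self._hashes(name))
```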

칼라 인접성과 기울기를 이용한 내용 기반 영상 검색 (Content-based Image Retrieval Using Color Adjacency and Gradient)

  • Jin, Hong-Yan;Lee, Ho-Young;Kim, Hee-Soo;Kim, Gi-Seok;Ha, Yeong-Ho
    • 대한전자공학회논문지SP
    • /
Vol.38 No.1
    • /
    • pp.104-115
    • /
    • 2001
  • This paper proposes a new content-based image retrieval method using color adjacency and gradient. The color histogram commonly used as feature information for color images has the advantages of being little affected by viewpoint or image rotation and of simple, fast feature computation; however, because it cannot represent the positions of colors, uniform quantization cannot distinguish different images with similar histograms, and the feature storage requirement is large. Instead of the quantization commonly used in existing methods, the proposed method computes the color variation between adjacent pixels in the image, i.e., the gradient, to obtain a more accurate color difference, thereby reducing the error caused by similar colors being quantized differently. At the same time, color adjacency information representing the main color composition of the image is extracted and represented as a binary array, reducing the large storage requirement of the feature information and improving comparison speed. Experimental results show that, compared with existing retrieval methods, the proposed method is more robust to changes in external conditions while using less feature storage.
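The binary color-adjacency feature lends itself to a direct sketch; an illustrative Python version, with the quantization level assumed rather than taken from the paper:

```python
import numpy as np

# Illustrative sketch of a color-adjacency feature stored as a binary array:
# quantize each pixel to a coarse color code, then mark which color pairs
# occur in neighboring pixels. The quantization level is an assumption.

def color_adjacency(img, levels=4):
    """img: (H, W, 3) uint8 RGB array -> binary adjacency matrix of colors."""
    q = (img // (256 // levels)).astype(int)  # coarse per-channel quantization
    code = q[..., 0] * levels**2 + q[..., 1] * levels + q[..., 2]
    n = levels**3
    adj = np.zeros((n, n), dtype=np.uint8)
    adj[code[:, :-1].ravel(), code[:, 1:].ravel()] = 1  # horizontal neighbors
    adj[code[:-1, :].ravel(), code[1:, :].ravel()] = 1  # vertical neighbors
    return np.maximum(adj, adj.T)  # symmetric binary feature, compact to store
```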
