• Title/Summary/Keyword: Cross-matching


A Study on Extraction Depth Information Using a Non-parallel Axis Image (사각영상을 이용한 물체의 고도정보 추출에 관한 연구)

  • 이우영;엄기문;박찬응;이쾌희
    • Korean Journal of Remote Sensing / v.9 no.2 / pp.7-19 / 1993
  • In stereo vision, when two parallel-axis images are used, only a small portion of the object is captured, the B/H (baseline-to-height) ratio is limited by the object size, and the resulting depth information is inaccurate. To overcome these difficulties, we take a non-parallel-axis image rotated by θ about the y-axis and match it against the parallel-axis image. The epipolar lines of the non-parallel-axis image differ from those of the parallel-axis image, so the two images cannot be matched directly. In this paper, we geometrically transform the non-parallel-axis image using the camera parameters so that its epipolar lines become parallel. NCC (normalized cross-correlation) is used as the match measure, area-based matching is used to find correspondences, and a 9×9 window size, chosen experimentally, is used. The focal length, which is needed to obtain depth information for the given object, is calculated by the least-squares method from the CCD camera characteristics and lens properties. Finally, we select 30 test points on an object whose elevation varies up to 150 mm, calculate their heights, and find that the height RMS error is 7.9 mm.
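The area-based NCC matching the abstract describes can be sketched as follows (a minimal illustration, not the authors' implementation; the 9×9 window and the search along rectified epipolar lines follow the abstract, while the disparity range is an assumption):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-size patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match_along_row(left, right, row, col, half=4, max_disp=20):
    """Area-based search along the (rectified) epipolar line: slide a
    9x9 window (half=4) across the right image and keep the disparity
    with the highest NCC score."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_score = 0, -1.0
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score
```

Once the non-parallel-axis image has been rectified so that its epipolar lines are horizontal, this one-dimensional search is all that correspondence requires.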

Broadband multimode antenna and its array for wireless communication base stations

  • Wu, Rui;Chu, Qing-Xin
    • ETRI Journal / v.41 no.2 / pp.167-175 / 2019
  • A wideband dual-polarized antenna coupled with a cross resonator is proposed for LTE700/GSM850/GSM900 base stations. An additional resonance is introduced to obtain strong coupling between the dipole and the resonator. Moreover, the input impedance of the proposed antenna stays close to 50 Ω, which results in better impedance matching, so a wide bandwidth can be achieved through multiresonance. A prototype was fabricated to verify the proposed design. The measured results show that the antenna has a fractional bandwidth of 35.7% from 690 MHz to 990 MHz for |S11| < -15 dB. Stable radiation patterns as well as stable gain are obtained over the entire operating band. Moreover, a five-element antenna array with an electrical downtilt of 0° to 14° is developed for modern base-station applications. Measurements show that a wide impedance bandwidth of 34.7% (690 MHz to 980 MHz), a stable HPBW (3-dB beamwidth) of 65° ± 5°, and a high gain of 13.8 ± 0.6 dBi are achieved at electrical downtilts of 0°, 7°, and 14°.
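The reported fractional bandwidth can be checked with the usual band-edge formula (a worked sketch; taking the center frequency as the arithmetic mean of the band edges is the common convention, not something stated in the abstract):

```python
def fractional_bandwidth(f_low, f_high):
    """FBW = (f_high - f_low) / f_center, with the center frequency
    taken as the arithmetic mean of the band edges."""
    f_center = (f_low + f_high) / 2.0
    return (f_high - f_low) / f_center

# For the 690-990 MHz band reported in the abstract:
# fractional_bandwidth(690, 990) = 300 / 840 ≈ 0.357, i.e. 35.7 %
```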

Hardware Accelerated Design on Bag of Words Classification Algorithm

  • Lee, Chang-yong;Lee, Ji-yong;Lee, Yong-hwan
    • Journal of Platform Technology / v.6 no.4 / pp.26-33 / 2018
  • In this paper, we propose an image retrieval algorithm for real-time processing and design it as hardware. The proposed method is based on the classification of the BoW (Bag of Words) algorithm and uses bit streams for image search. K-fold cross-validation is used to verify the algorithm. The data are divided into seven classes of seven images each, for a total of 49 test images. The tests measure both accuracy and speed. The image classification accuracy was 86.2% for the BoW algorithm and 83.7% for the proposed hardware-accelerated algorithm, so the BoW algorithm was 2.5 percentage points higher. The image retrieval processing time was 7.89 s for BoW and 1.55 s for our algorithm, making ours 5.09 times faster. The algorithm is divided into software and hardware parts. The software part is written in C. The Scale-Invariant Feature Transform (SIFT) algorithm is used to extract feature points that are invariant to scale and rotation, and bit streams are generated from the extracted feature points. In the hardware part, the proposed image retrieval algorithm is written in Verilog HDL and designed and verified with an FPGA and Design Compiler. The generated bit streams are stored, the clustering step is performed, and a search-image database or an input-image database is generated and matched. With the proposed algorithm, searching by database matching, in which each object is represented in the database, improves user convenience and satisfaction in terms of speed.
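The K-fold cross-validation used to verify the classifier (7 classes × 7 images = 49 images) can be sketched as follows (a generic illustration, not the paper's code; the fold count and random seed are assumptions):

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Split sample indices into k roughly equal folds; each fold serves
    once as the test set while the remaining folds form the training
    set, so every image is tested exactly once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

With 49 images and k=7, every fold holds exactly 7 test images and 42 training images.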

Effect of post processing of digital image correlation on obtaining accurate true stress-strain data for AISI 304L

  • Angel, Olivia;Rothwell, Glynn;English, Russell;Ren, James;Cummings, Andrew
    • Nuclear Engineering and Technology / v.54 no.9 / pp.3205-3214 / 2022
  • The aim of this study is to provide a clear and accessible method for obtaining accurate true stress-strain data, and to extend the limited material data beyond the ultimate tensile strength (UTS) for AISI 304L. AISI 304L is used in the outer construction of some types of nuclear transport packages because of its post-yield ductility and high failure strain, yet material data for AISI 304L beyond UTS are scarce in the literature. 3D digital image correlation (DIC) was used during a series of uniaxial tensile experiments. A direct method extracted data such as the true strain and the instantaneous cross-sectional area throughout testing, so that the true stress-strain response of the material up to failure could be constructed. Post-processing of the DIC data has a considerable effect on the accuracy of the resulting true stress-strain data. The influence of subset size and data smoothing was investigated by using finite element analysis to inverse-model the force-displacement response and thereby determine the true stress-strain curve. The FE force-displacement response was iteratively adapted by varying the subset size and the smoothing of the DIC data. Results were validated by matching the force-displacement response of the FE model to the experimental force-displacement curve.
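The distinction the study rests on, between the textbook engineering-to-true conversion (valid only up to UTS) and the DIC direct method (valid up to failure), can be summarized with the standard formulas (a sketch of those formulas, not the authors' post-processing pipeline):

```python
import math

def true_from_engineering(eng_stress, eng_strain):
    """Standard conversion, valid only up to the onset of necking (UTS),
    because it assumes uniform deformation:
        sigma_t = sigma_e * (1 + e),   eps_t = ln(1 + e)."""
    return eng_stress * (1.0 + eng_strain), math.log(1.0 + eng_strain)

def true_stress_from_dic(force, instantaneous_area):
    """Beyond UTS the conversion above breaks down; with DIC the
    instantaneous cross-sectional area is measured directly, so
    sigma_t = F / A(t) remains valid up to failure."""
    return force / instantaneous_area
```

This is why the direct DIC measurement of the cross-sectional area is what allows the curve to be extended beyond UTS.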

Development of a Method for Calculating the Allowable Storage Capacity of Rivers by Using Drone Images (드론 영상을 이용한 하천의 구간별 허용 저수량 산정 방법 개발)

  • Kim, Han-Gyeol;Kim, Jae-In;Yoon, Sung-Joo;Kim, Taejung
    • Korean Journal of Remote Sensing / v.34 no.2_1 / pp.203-211 / 2018
  • Dam discharge is carried out to manage rivers and their surroundings during the rainy season or drought, and it should be based on an accurate understanding of the flow that a river can accommodate. Understanding the allowable storage capacity of a river is therefore an important factor in managing the river environment. However, the methods currently used to determine the allowable flow of rivers, based on water-level gauges and images, are limited in accuracy and efficiency. To solve these problems, this paper proposes a method to automatically calculate the allowable storage capacity of a river from images taken by a drone. In the first step, we create a 3D model of the river from the drone images; this process consists of tie-point extraction, image orientation, and image matching. In the second step, the allowable storage capacity is calculated by cross-section analysis of the river using the generated 3D model and the road and river layers of the target area. In this step, we determine the maximum water level of the river, extract cross-sectional profiles along the river, and use the 3D model to calculate the allowable storage capacity of each section. To verify our method, we applied it to data from the Bukhan River, and the allowable storage volume was extracted automatically. The proposed method is expected to be useful for real-time management of rivers and their surroundings using drone-based 3D models.
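The cross-section-based volume computation in the second step can be sketched as follows (a minimal illustration under assumed inputs: a sampled bed profile per section and a fixed spacing between sections; not the authors' implementation):

```python
import numpy as np

def section_area(y, z, water_level):
    """Allowable (wetted) area of one cross-section: integrate the depth
    (water level minus bed elevation, clipped at zero) across the
    section with the trapezoidal rule. `y` is the transverse coordinate
    and `z` the bed profile sampled from the 3D model."""
    y = np.asarray(y, float)
    depth = np.clip(water_level - np.asarray(z, float), 0.0, None)
    return float(((y[1:] - y[:-1]) * (depth[1:] + depth[:-1]) / 2.0).sum())

def allowable_volume(areas, spacing):
    """Integrate the cross-sectional areas along the river axis
    (trapezoidal rule between consecutive sections a fixed
    distance apart)."""
    a = np.asarray(areas, float)
    return float(spacing * (a[1:] + a[:-1]).sum() / 2.0)
```

For a rectangular test channel (flat bed, width 10 m, water level 2 m) each section area is 20 m², and three such sections 100 m apart give 4000 m³.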

Exploratory Case Study on Key Success Factors of Product-Service Systems (Product-Service System(PSS) 성공과 실패요인에 관한 탐색적 사례 연구)

  • Park, A-Rum;Jin, Dong-Su;Lee, Kyoung-Jun
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.255-277 / 2011
  • A Product-Service System (PSS), an integrated combination of product and service, provides new value to customers and makes companies sustainable as well. The objective of this paper is to derive critical success factors (CSFs) of PSS through a multiple-case study. First, we review the concepts and types found in the PSS and platform-business literature. Second, after investigating various cases with the characteristics of PSS and platform business, we select four cases: Apple's iPod, Amazon's Kindle, Microsoft's Zune, and Sony's e-book reader. The four cases are then categorized as successful or failed according to the case-selection criteria and PSS classification. For case selection we consider two methodologies: the 'Strategies for the Selection of Samples and Cases' proposed by Bent (2006) and the seven case-selection procedures proposed by Jason and John (2008). From the former, 'stratified sample and paradigmatic cases' is adopted as the sampling option. Of the seven procedures ('typical', 'diverse', 'extreme', 'deviant', 'influential', 'most-similar', and 'most-different'), only 'diverse', 'most-similar', and 'most-different' are applied. For PSS classification, the eight PSS types suggested by Tukker (2004) are used: 'product related', 'advice and consultancy', 'product lease', 'product renting/sharing', 'product pooling', 'activity management', 'pay per service unit', and 'functional result'. We categorize the four selected cases as a product-oriented group because they not only sell a product but also offer services needed during the use phase of the product. We then analyze the four cases using the cross-case pattern analysis suggested by Eisenhardt (1991), who argued that three processes are required to avoid reaching premature or even false conclusions.
The first step is to select categories or dimensions and to find within-group similarities coupled with intergroup differences. In the second step, pairs of cases are selected and listed; this step forces researchers to find the subtle similarities and differences between cases. The third step is to divide the data by data source. The cross-case analysis indicates that the similarities between the iPod and the Kindle, the successful cases, are a convenient user interface, a successful platform strategy, and rich contents. The difference between them is that, whereas the iPod became a culture code, the Kindle made low price its main strategy. Meanwhile, the similarity between the Zune and the PRS series, the failed cases, is a lack of sufficient applications and contents. The difference between them is that, whereas the Zune adopted an undifferentiated strategy, the PRS series pursued a high-price strategy. From this analysis we generate three hypotheses: a successful PSS requires (1) a convenient user interface, (2) a reciprocal (win-win) business model, and (3) sufficient quantities of applications and contents. To verify the hypotheses, we use a cross-matching (or pattern-matching) methodology, which matches three keywords of the hypotheses (user interface, reciprocal business model, contents) against previous papers on PSS, digital contents, and information systems (IS). Finally, this paper suggests three implications from the results. A successful PSS needs to provide differentiated value for customers, such as a convenient user interface, e.g., the simple design of iTunes (iPod) and free connection to the Kindle Store.
A successful PSS also requires a mutually beneficial business model, as Apple and Amazon implement policies that provide reasonable profit sharing for third parties. Finally, a successful PSS requires sufficient quantities of applications and contents.

The Influence of Various Factors upon the Membrane Filter Technique on Raw Water of the Nak-Dong River (낙동강 원수에 대한 대장균군 막여과시험법에 있어서 여러 인자가 결과에 미치는 영향)

  • Hyun, Jae-Yeoul;Yoon, Jong-Ho;Shin, Sang-Hee;Kim, Jong-Woo
    • Journal of Korean Society on Water Environment / v.25 no.2 / pp.205-211 / 2009
  • In this study, the membrane filter method was compared with the MPN method for the analysis of total coliforms in raw water, using raw test waters and controls including six standard strains of coliforms, and the various factors affecting the detection of general and fecal coliforms were analyzed. The error rate for the detection of five standard strains ranged from 0 to 6% with the membrane filter method and from 45 to 133% with the MPN method; the error rate of the membrane filter method was thus the lower. The membrane filter method (m-Endo) differed from the MPN method (BGLB) in detection sensitivity for 10% (11 out of 111) of the coliforms isolated from raw water. The membrane filter method was also less affected by factors such as temperature, turbidity, powdered charcoal, contamination, and reverse pressure. In conclusion, considering detection accuracy and tolerance to various experimental factors, the membrane filter method is the better method for analyzing total coliforms in raw water.

Application-aware Routing Protocol Selection Scheme in Wireless Mesh Network (무선 메쉬 네트워크에서의 응용 서비스 인지 라우팅 프로토콜 선택 기법)

  • Choi, Hyo-Hyun;Shon, Tae-Shik;Park, Yong-Suk
    • Journal of the Institute of Electronics Engineers of Korea TC / v.46 no.6 / pp.103-110 / 2009
  • We propose a novel routing protocol selection scheme based on application features in wireless mesh networks. Each application has its own characteristics, such as its typical packet size: a text messenger generates short packets, while a file-transfer application generates long packets. Routing protocols in wireless mesh networks likewise discover routes with different characteristics: some find shortest-hop routes, while others find routes with higher bandwidth even though they have more hops. The proposed scheme selects the routing protocol by matching the features of the routing protocol to those of the application. This paper presents the system we have developed to support mesh routing, the proposed scheme, and experimental results.
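The matching idea can be reduced to a toy rule (the threshold value and protocol labels here are assumptions for illustration; the paper matches richer application and protocol feature sets):

```python
def select_routing_protocol(avg_packet_size, threshold=256):
    """Toy feature-matching rule: applications with short packets
    (e.g. text messaging) are routed over a shortest-hop route, while
    applications with long packets (e.g. file transfer) are routed
    over a high-bandwidth route even if it has more hops."""
    return "shortest-hop" if avg_packet_size <= threshold else "high-bandwidth"
```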

Effects of abutment diameter, luting agent type, and re-cementation on the retention of implant-supported CAD/CAM metal copings over short abutments

  • Safari, Sina;Ghavam, Fereshteh Hosseini;Amini, Parviz;Yaghmaei, Kaveh
    • The Journal of Advanced Prosthodontics / v.10 no.1 / pp.1-7 / 2018
  • PURPOSE. The aim of this study was to evaluate the effects of abutment diameter, cement type, and re-cementation on the retention of implant-supported CAD/CAM metal copings over short abutments. MATERIALS AND METHODS. Sixty abutments of two different diameters, reduced in height to 3 mm, were vertically mounted in acrylic resin blocks with matching implant analogues. The specimens were divided into two diameter groups: 4.5 mm and 5.5 mm (n=30). For each abutment, a CAD/CAM metal coping with an occlusal loop was manufactured. Each group was subdivided into three subgroups (n=10), each using a different cement type: resin-modified glass ionomer (RMGI), resin cement, or zinc oxide-eugenol. After incubation and thermocycling, the removal force was measured with a universal testing machine at a crosshead speed of 0.5 mm/min. In the zinc oxide-eugenol group, after removal of the coping, the cement remnants were completely cleaned off, and the copings were re-cemented with resin cement and re-tested. Two-way ANOVA, post hoc Tukey tests, and paired t-tests were used to analyze the data (α=.05). RESULTS. The highest pull-off force was recorded in the resin cement group (414.8 N), followed by the re-cementation group (380.5 N). Increasing the diameter improved retention significantly (P=.006). The difference in retention between the cemented and re-cemented copings was not statistically significant (P=.40). CONCLUSION. Resin cement provided retention almost twice as strong as that of the RMGI. Increasing the abutment diameter improved retention significantly. Re-cementation with resin cement did not differ from the initial cementation with resin cement.

SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng;Jiang, Yifeng;Huang, Zhuandi;Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.10 / pp.4968-4986 / 2017
  • In this paper, we address the difficulty of automatically generating a plausible depth map from a single image of an unstructured environment. The aim is to extrapolate a depth map with a more correct, rich, and distinct depth order that is both quantitatively accurate and visually pleasing. Our technique, fundamentally based on the existing DepthTransfer algorithm, transfers depth information at the level of superpixels within an instance-based learning framework that replaces the original pixel basis. A vital feature of the superpixel approach, which enhances matching precision, is the posterior incorporation of predicted semantic labels into the depth extraction procedure. Finally, a modified cross bilateral filter is applied to augment the final depth field. For training and evaluation, experiments were conducted on the Make3D Range Image Dataset and demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both efficacy and computational efficiency. The approach can be used to automatically convert 2D images into stereo pairs for 3D visualization, producing anaglyph images that are more realistic and immersive.
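A cross (joint) bilateral filter of the kind used to augment the depth field can be sketched as follows (a brute-force illustration of the classical filter, not the paper's modified variant; the guide image supplies the range weights, so depth edges are pulled into alignment with image edges):

```python
import numpy as np

def cross_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth `depth` with a spatial Gaussian while the range weights
    come from the `guide` image rather than from `depth` itself.
    Brute-force double loop, for illustration only."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    d = np.pad(depth.astype(float), radius, mode="edge")
    g = np.pad(guide.astype(float), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            dwin = d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gwin = g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel driven by the guide image's intensities.
            range_w = np.exp(-((gwin - g[i + radius, j + radius])**2)
                             / (2.0 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = (weights * dwin).sum() / weights.sum()
    return out
```

Because the range kernel looks at the guide image, a noisy but edge-free region of the depth map is smoothed, while depth values are prevented from bleeding across the guide image's edges.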