Title/Summary/Keyword: Position detection

Incorporation of RAPD Linkage Map into RFLP Map in Glycine max (L.) Merr.

  • Choi, In-Soo; Kim, Yong-Chul. Journal of Life Science, v.13 no.3, pp.280-290, 2003.
  • Incorporating RAPD markers into the earlier classical and RFLP genetic linkage maps facilitates the construction of a detailed genetic map by compensating for the scarcity of one marker type in a region of interest. The objective of this paper was to present the features we observed when we merged a RAPD map from an intraspecific cross of Glycine max × G. max, 'Essex' × PI 437654, with the public RFLP map developed from an interspecific cross of G. max × G. soja. Among the 27 linkage groups of the RAPD map, eight contained probe/enzyme-combination RFLP markers, which allowed us to incorporate RAPD markers into the public RFLP map. Rearrangement of map positions was observed: in incorporating L.G. C-3 into the public RFLP linkage groups a1 and a2, the pSAC3 and pA136 region and the pA170/EcoRV and pB170/HindIII region each appeared in the opposite order. In addition, pK400 was localized 1.8 cM from pA96-1 and 8.4 cM from pB172 in the public RFLP map, but 9.9 cM from the i locus and 18.9 cM from pA85 in our study. A noticeable expansion of map distances in the intraspecific cross of Essex and PI 437654 was also observed: the distance between probes pA890 and pK493 in L.G. C-1 was 48.6 cM versus only 13.3 cM in the public RFLP map, and the distances from pB32-2 to pA670 and from pA670 to pA668 in L.G. C-2 were 50.9 cM and 31.7 cM versus 35.9 cM and 13.5 cM in the public RFLP map. The detection of duplicate loci from the same probe, mapped to the same and/or different linkage groups, was another feature we observed.
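
The map distances quoted above are in centimorgans (cM). The abstract does not say which mapping function the authors used, so purely as a hedged illustration, the following minimal Python sketch shows the two standard functions (Haldane and Kosambi) that convert an observed recombination fraction r into a map distance; the value of r is hypothetical, not taken from the paper.

    import math

    def haldane_cM(r: float) -> float:
        """Haldane mapping function: d = -50 * ln(1 - 2r), valid for r < 0.5."""
        return -50.0 * math.log(1.0 - 2.0 * r)

    def kosambi_cM(r: float) -> float:
        """Kosambi mapping function: d = 25 * ln((1 + 2r) / (1 - 2r))."""
        return 25.0 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

    # Hypothetical recombination fraction, not a value from the paper.
    r = 0.10
    print(f"Haldane: {haldane_cM(r):.1f} cM, Kosambi: {kosambi_cM(r):.1f} cM")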

Investigation of Intertidal Zone using TerraSAR-X

  • Park, Jeong-Won; Lee, Yoon-Kyung; Won, Joong-Sun. Korean Journal of Remote Sensing, v.25 no.4, pp.383-389, 2009.
  • The main objective of this research is a feasibility study of the intertidal zone using an X-band radar satellite, TerraSAR-X. The TerraSAR-X data were acquired over the west coast of Korea, where the large Ganghwa and Yeongjong tidal flats have developed. The investigation covers: 1) waterline and backscattering characteristics of high-resolution X-band images of tidal flats; 2) the polarimetric signature of halophytes (salt marsh plants), specifically Suaeda japonica; and 3) the phase and coherence of interferometric pairs. Waterlines from TerraSAR-X data satisfy the horizontal accuracy requirement of 60 m, which corresponds to an average height difference of 20 cm, while other current spaceborne SAR systems cannot meet this requirement. HH polarization was best for waterline extraction, and its geometric position is reliable owing to the short wavelength and accurate orbit control of TerraSAR-X. The halophyte Suaeda japonica is an indicator of local sea-level change. From X-band ground radar measurements, VV/VH dual polarization was anticipated to be best for detecting the plant, with about a 9 dB difference at a 35-degree incidence angle. However, TerraSAR-X HH/HV dual polarization turned out to be more effective for salt marsh monitoring: the HH-HV difference reached a maximum of about 7.9 dB at a 31.6-degree incidence angle, which is fairly consistent with the X-band ground radar measurements. The boundary of the salt marsh is effectively traceable, specifically with TerraSAR-X cross-polarization data. While the interferometric phase is not coherent within normal tidal flats, salt marsh areas where landization has proceeded show coherent interferometric phases regardless of season or tide conditions. Although TerraSAR-X interferometry may not be effective for directly measuring height or changes of the tidal flat surface, TanDEM-X or other future X-band SAR tandem missions with a one-day interval would be useful for mapping tidal flat topography.
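
The HH-HV separation above is a difference of backscatter coefficients expressed in decibels. As a minimal sketch (not the authors' processing chain; the intensity values and the threshold are assumed for illustration), the Python snippet below converts linear sigma-0 intensities to dB and takes the channel difference, the quantity compared against the roughly 7.9 dB figure reported for salt marsh.

    import numpy as np

    def to_db(sigma0_linear: np.ndarray) -> np.ndarray:
        """Convert linear backscatter coefficients (sigma-0) to decibels."""
        return 10.0 * np.log10(sigma0_linear)

    # Hypothetical per-pixel backscatter intensities for HH and HV channels.
    hh = np.array([0.20, 0.15, 0.30])
    hv = np.array([0.030, 0.025, 0.050])

    # Channel difference in dB: a large HH-HV separation suggests salt marsh
    # in the abstract's setting (threshold illustrative, not from the paper).
    hh_hv_db = to_db(hh) - to_db(hv)
    salt_marsh_mask = hh_hv_db > 7.0
    print(hh_hv_db, salt_marsh_mask)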

Development of Deep Learning Structure to Improve Quality of Polygonal Containers

  • Yoon, Suk-Moon; Lee, Seung-Ho. Journal of IKEEE, v.25 no.3, pp.493-500, 2021.
  • In this paper, we propose a deep learning structure to improve the quality of polygonal containers. The structure consists of convolution layers, a bottleneck layer, fully connected layers, and a softmax layer. A convolution layer obtains a feature map by performing 3x3 convolutions with several feature filters on the input image or on the feature map of the previous layer. The bottleneck layer selects only the optimal features from the feature map extracted by the convolution layers: it reduces the number of channels with a 1x1 convolution followed by ReLU, then performs a 3x3 convolution with ReLU. A global average pooling operation performed after the bottleneck layer reduces the size of the feature map, again keeping only the optimal features. The fully connected stage produces its output through six fully connected layers. The softmax layer multiplies the values of the input nodes by the weights to each target node, sums them, and converts the result into a value between 0 and 1 through the softmax activation function. After training is completed, the recognition process classifies non-circular glass bottles by performing image acquisition with a camera, position detection, and non-circular glass bottle classification using deep learning, as in the training process. To evaluate the performance of the proposed structure, an experiment was conducted at an authorized testing institute: good/defective discrimination accuracy was 99%, on par with the world's best reported level, and inspection time averaged 1.7 seconds, which is within the operating-time standards of production processes that use non-circular machine vision systems. Therefore, the effectiveness of the deep learning structure proposed in this paper to improve the quality of polygonal containers was demonstrated.
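
The abstract names the layer types but not their exact sizes. The sketch below is a minimal PyTorch rendering of the described structure under assumed dimensions: the input resolution, channel counts, width of the six fully connected layers, and two-class (good/defective) output are all illustrative rather than taken from the paper.

    import torch
    import torch.nn as nn

    class PolygonalContainerNet(nn.Module):
        """Sketch of the described structure: 3x3 convolutions, a 1x1 -> 3x3
        bottleneck, global average pooling, six FC layers, and softmax."""
        def __init__(self, num_classes: int = 2):  # good/defective (assumed)
            super().__init__()
            self.conv = nn.Sequential(            # 3x3 feature extraction
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.bottleneck = nn.Sequential(      # 1x1 channel reduction, then 3x3
                nn.Conv2d(64, 16, 1), nn.ReLU(),
                nn.Conv2d(16, 64, 3, padding=1), nn.ReLU(),
            )
            self.gap = nn.AdaptiveAvgPool2d(1)    # global average pooling
            fc, width = [], 64
            for _ in range(5):                    # six FC layers in total
                fc += [nn.Linear(width, width), nn.ReLU()]
            fc += [nn.Linear(width, num_classes)]
            self.fc = nn.Sequential(*fc)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.bottleneck(self.conv(x))
            x = self.gap(x).flatten(1)
            return torch.softmax(self.fc(x), dim=1)  # values between 0 and 1

    # Example: one 128x128 RGB image (resolution assumed).
    probs = PolygonalContainerNet()(torch.randn(1, 3, 128, 128))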

Automatic gasometer reading system using selective optical character recognition

  • Lee, Kyohyuk; Kim, Taeyeon; Kim, Wooju. Journal of Intelligence and Information Systems, v.26 no.2, pp.1-25, 2020.
  • In this paper, we present an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image over a private LTE network to a cloud server, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; some applications, however, need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; strings such as the device type, manufacturer, manufacturing date, and specification are not valuable to the application. The application therefore has to analyze only the region of interest and the specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory (LSTM) network that converts the sequential features into character strings through a time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4 to 5 Arabic numerals. All system components are implemented in the Amazon Web Services (AWS) cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The architecture adopts a master-slave processing structure for efficient, fast parallel processing that copes with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks that carry out character recognition and runs on the NVIDIA GPU module. The slave process continually polls the input queue for recognition requests; when a request arrives, it converts the image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to an output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images were used for training and validation, and 4,135 images for testing. For each training epoch, we randomly split the 22,985 images at an 8:2 ratio into training and validation sets. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal means clean image data, noise means images with a noise signal, reflex means images with light reflection in the gasometer region, scale means images with small object size due to long-distance capture, and slant means images that are not horizontally flat. The final character string recognition accuracies on normal data are 0.960 for the device ID and 0.864 for the gas usage amount.
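
For the recognition stage, the abstract describes a CNN that turns a detected region of interest into a spatial feature sequence and a bidirectional LSTM that maps that sequence to a character string. The following PyTorch sketch shows that CRNN shape under assumed sizes; the crop resolution, channel counts, and the 11-class output (ten digits plus a CTC-style blank) are illustrative, not from the paper.

    import torch
    import torch.nn as nn

    class CRNN(nn.Module):
        """Sketch of the recognition stage: a CNN converts a cropped region
        of interest into a left-to-right feature sequence, and a bidirectional
        LSTM maps that sequence to per-step digit scores."""
        def __init__(self, num_classes: int = 11):  # 10 digits + blank (assumed)
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.rnn = nn.LSTM(input_size=128 * 8, hidden_size=128,
                               bidirectional=True, batch_first=True)
            self.head = nn.Linear(2 * 128, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            f = self.cnn(x)                                   # (B, C, H, W)
            b, c, h, w = f.shape
            seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # width as time axis
            out, _ = self.rnn(seq)                            # bidirectional pass
            return self.head(out)                             # (B, W, num_classes)

    # Example: a 32x128 grayscale crop of the gas-usage-amount region (assumed).
    scores = CRNN()(torch.randn(1, 1, 32, 128))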
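
The master-slave structure itself reduces to two FIFO queues with a polling worker. The sketch below illustrates that control flow in plain Python; the function names, the shutdown sentinel, and the placeholder recognizer are assumptions for illustration, while the actual system runs the three networks on the GPU slave.

    import queue
    import threading

    input_queue = queue.Queue()   # FIFO queue of reading requests (master -> slave)
    output_queue = queue.Queue()  # recognition results (slave -> master)

    def recognize(image_ref: str) -> tuple:
        # Placeholder for the three-network pipeline described above.
        return (image_ref, "123456789012", "4321", "string positions")

    def slave() -> None:
        while True:
            image_ref = input_queue.get()           # poll the input queue
            if image_ref is None:                   # shutdown sentinel (assumed)
                break
            output_queue.put(recognize(image_ref))  # return result, back to idle

    worker = threading.Thread(target=slave)
    worker.start()
    input_queue.put("gasometer_0001.jpg")           # master pushes a request
    print(output_queue.get())                       # master reads the result
    input_queue.put(None)
    worker.join()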