• Title/Summary/Keyword: Deep Indexing

Search Results: 12

Exploring the temporal and spatial variability with DEEP-South observations: reduction pipeline and application of multi-aperture photometry

  • Shin, Min-Su; Chang, Seo-Won; Byun, Yong-Ik; Yi, Hahn; Kim, Myung-Jin; Moon, Hong-Kyu; Choi, Young-Jun; Cha, Sang-Mok; Lee, Yongseok
    • The Bulletin of The Korean Astronomical Society, v.43 no.1, pp.70.1-70.1, 2018
  • The DEEP-South photometric census of small Solar System bodies is producing massive time-series data of variable, transient, or moving objects as a by-product. To fully investigate unexplored variable phenomena, we present an application of multi-aperture photometry and FastBit indexing techniques to a portion of the DEEP-South year-one data. Our new pipeline is designed to perform automated point-source detection, robust high-precision photometry, and calibration of non-crowded fields that overlap with previously surveyed areas. We also adopt an efficient data indexing algorithm for faster access to the DEEP-South database. In this paper, we show example applications of catalog-based variability searches to find new variable stars and to recover targeted asteroids. We discovered 21 new periodic variables, including two eclipsing binary systems and one white dwarf/M dwarf pair candidate. We also successfully recovered the astrometry and photometry of two near-Earth asteroids, 2006 DZ169 and 1996 SK, along with updated properties of their rotational signals (e.g., period and amplitude).
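A minimal sketch of the compressed-bitmap-index idea behind FastBit, using plain NumPy boolean bitmaps over hypothetical catalog columns; the column names, bin widths, and cuts are illustrative, not taken from the DEEP-South database:

```python
import numpy as np

# Hypothetical photometric catalog columns (illustrative only).
rng = np.random.default_rng(0)
n = 1_000_000
mag = rng.uniform(12.0, 21.0, n)        # mean R-band magnitude
rms = rng.exponential(0.02, n)          # light-curve scatter
n_obs = rng.integers(10, 300, n)        # number of epochs per source

# Build one bitmap (boolean array) per bin of each indexed column.
mag_bins = np.arange(12, 22)            # 1-mag-wide bins
mag_bitmaps = {lo: (mag >= lo) & (mag < lo + 1) for lo in mag_bins}
variable_bitmap = rms > 0.05            # "high scatter" bitmap
well_sampled_bitmap = n_obs >= 50

# A catalog-based variability search then reduces to bitwise ANDs of
# precomputed bitmaps, which is what makes bitmap-indexed queries fast.
candidates = mag_bitmaps[15] & variable_bitmap & well_sampled_bitmap
print("variability candidates:", np.count_nonzero(candidates))
```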


Intelligent Missing Persons Index System Implementation Based on OpenCV Image Processing and TensorFlow Deep-Learning Image Processing

  • Baek, Yeong-Tae; Lee, Se-Hoon; Kim, Ji-Seong
    • Journal of the Korea Society of Computer and Information, v.22 no.1, pp.15-21, 2017
  • In this paper, we present a solution to the problems caused by using only text-based information as an index element when a commercialized missing-person indexing system indexes missing persons registered in the database. The existing system could not be used for missing-person inquiries because it could not formalize the image of the missing person registered along with the report. To solve these problems, we propose a method that extracts image similarity using OpenCV image processing and TensorFlow deep-learning image processing, turning images of missing persons into meaningful information. To verify the indexing method used in this paper, we built a Web server that presents the information most likely to be needed to users first, using an image of the same subject taken in a non-regular environment as the search element.
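A minimal sketch of this kind of image-similarity search, combining OpenCV preprocessing with a pretrained TensorFlow feature extractor; the choice of MobileNetV2, the 224x224 input size, and cosine similarity are assumptions for illustration, not the authors' exact model:

```python
import cv2
import numpy as np
import tensorflow as tf

# Pretrained CNN used purely as a feature extractor (global average pooling).
extractor = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", weights="imagenet")

def embed(path: str) -> np.ndarray:
    """Read an image with OpenCV and return an L2-normalized CNN feature vector."""
    bgr = cv2.imread(path)                       # OpenCV loads images as BGR
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    rgb = cv2.resize(rgb, (224, 224))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        rgb.astype(np.float32)[np.newaxis])
    feat = extractor.predict(x, verbose=0)[0]
    return feat / np.linalg.norm(feat)

def similarity(query_path: str, registered_path: str) -> float:
    """Cosine similarity between a query photo and a registered missing-person photo."""
    return float(np.dot(embed(query_path), embed(registered_path)))

# Registered images whose similarity to the query exceeds a chosen threshold
# would be returned first by a web server like the one described in the paper.
```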

Deep-Learning Approach for Text Detection Using Fully Convolutional Networks

  • Tung, Trieu Son; Lee, Gueesang
    • International Journal of Contents, v.14 no.1, pp.1-6, 2018
  • Text, as one of the most influential inventions of humanity, has played an important role in human life since ancient times. The rich and precise information embodied in text is very useful in a wide range of vision-based applications: text data extracted from images can provide information for automatic annotation, indexing, language translation, and assistance systems for impaired persons. Natural-scene text detection is therefore an important and active research topic in computer vision and document analysis. Previous methods perform poorly due to numerous false-positive and true-negative regions. In this paper, a fully-convolutional-network (FCN)-based method with a supervised architecture is used to localize textual regions. The model was trained directly on images, with pixel values used as inputs and the binary ground truth used as labels. The method was evaluated on the ICDAR-2013 dataset and proved comparable to other feature-based methods. It could expedite future research on deep-learning-based text detection.
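A minimal Keras sketch of a fully convolutional text detector of the kind described: an image goes in, a per-pixel text/non-text probability map comes out, and there are no dense layers. Layer counts and widths are illustrative, not the authors' exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_text_fcn(channels: int = 3) -> tf.keras.Model:
    """Fully convolutional network: image in, per-pixel text probability map out."""
    inputs = tf.keras.Input(shape=(None, None, channels))  # arbitrary image size
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    # Upsample back to the input resolution; no dense layers anywhere.
    x = layers.UpSampling2D(2)(x)
    x = layers.UpSampling2D(2)(x)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # binary text map
    return tf.keras.Model(inputs, outputs)

model = build_text_fcn()
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(images, binary_ground_truth_masks, ...) mirrors the paper's setup of
# training directly on pixel values against binary ground-truth labels.
```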

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache; Badra Nawal Benmoussat
    • Journal of Information Processing Systems, v.19 no.6, pp.730-744, 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also help assist elderly and frail persons and improve their lives. Human activity recognition remains problematic because of the large variations in how actions are executed; here it is realized through an external device, similar to a robot, acting as a personal assistant. The inferred information is used both online, to assist the person, and offline, to support the personal assistant. Our proposed method is robust against the various sources of variability in action execution, and the major purpose of this paper is an efficient and simple recognition method that uses only egocentric camera data with a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera and several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
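A minimal sketch of a frame-level CNN classifier for egocentric activity recognition as described; the input size, layer widths, and number of activity classes are assumptions for illustration, not the authors' network:

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_ACTIVITIES = 10  # hypothetical number of activity classes

def build_activity_cnn() -> tf.keras.Model:
    """Simple CNN that classifies a single egocentric video frame."""
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(128, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(NUM_ACTIVITIES, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_activity_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Per-frame predictions can be averaged over a clip to label the whole activity,
# one simple way to smooth over variability in action execution.
```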

TENSILE STRENGTH OF LASER-WELDED TITANIUM AND GOLD ALLOYS (티타늄과 금합금의 레이저 용접부의 인장강도)

  • Song, Yun-Gwan; Ha, Il-Soo; Song, Kwang-Yeob
    • The Journal of Korean Academy of Prosthodontics, v.38 no.2, pp.200-213, 2000
  • Lasers have given dentistry a new rapid, economical, and accurate technique for metal joining. Although laser welding has been recommended as an accurate technique, it has some limitations. For example, the two joining surfaces must be in tight-fitting contact, which may be difficult to achieve in some situations. The tensile samples used for this study were made from custom-made pure titanium and type III gold alloy plates. 27 of 33 specimens were sectioned perpendicular to their long axis with a carborundum disk and water coolant. The remaining six specimens served as the control group. A group of 6 specimens was positioned as butt joints in a custom parallel positioning device with a feeler gauge at each of three gaps: 0.00, 0.25, and 0.50 mm. All specimens were then machined to a uniform cross-sectional dimension; none of the specimens was subjected to any subsequent heat treatment. Scanning electron microscopy was performed on representative tested specimens at the fractured surfaces in both the parent metal and the weld. Vickers hardness was measured at the center of the welds with a micropenetrometer using a force of 300 g for 15 seconds, with measurements made approximately 200 μm and 500 μm deep from each surface. One-way analysis of variance (ANOVA) and Scheffe's test were used to detect differences between groups. The purpose of this study was to compare the strength and properties of joints obtained at various butt-joint gaps by laser welding of type III gold alloy and pure titanium tensile specimens in an argon atmosphere. The results of this study were as follows: 1. When indexing and welding pure titanium, there was no decrease in ultimate tensile strength compared with the unsectioned alloy for indexing gaps of 0.00 to 0.50 mm, although increasing gap size may bring increased distortion (p>0.05). 2. When indexing and welding type III gold alloy, there were significant differences in ultimate tensile strength among the groups with weld gaps of 0.00, 0.25, and 0.50 mm and the control group. The group with butt contact and no weld gap showed a significantly higher ultimate tensile strength than the groups with weld gaps of 0.25 and 0.50 mm (p<0.05). 3. When indexing and welding the dissimilar-metal combination of type III gold alloy and pure titanium, there were significant differences in ultimate tensile strength between the groups with weld gaps of 0.00, 0.25, and 0.50 mm; however, the mechanical properties of the welded joint would be too brittle to be clinically acceptable (p<0.05). 4. The presence of large pores in the laser-welded joint appears to be the most important factor controlling the tensile strength of the weld in both pure titanium and type III gold alloy.
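A minimal sketch of the statistical comparison used in the study, one-way ANOVA across the weld-gap groups and the control; the tensile-strength values below are invented placeholders, not the study's data, and Scheffe's post-hoc test would follow on the same groups:

```python
from scipy import stats

# Hypothetical ultimate tensile strength values (MPa) per group; not the study's data.
control      = [452, 448, 460, 455, 449, 451]
gap_0_00_mm  = [447, 450, 446, 452, 444, 449]
gap_0_25_mm  = [401, 395, 410, 398, 405, 399]
gap_0_50_mm  = [388, 392, 379, 385, 390, 381]

# One-way ANOVA tests whether at least one group mean differs.
f_stat, p_value = stats.f_oneway(control, gap_0_00_mm, gap_0_25_mm, gap_0_50_mm)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, a post-hoc comparison (Scheffe's test in the study) identifies
# which weld-gap groups differ from one another.
```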


Multi-aperture Photometry Pipeline for DEEP-South Data

  • Chang, Seo-Won; Byun, Yong-Ik; Kim, Myung-Jin; Moon, Hong-Kyu; Yim, Hong-Suh; Shin, Min-Su; Kang, Young-Woon
    • The Bulletin of The Korean Astronomical Society, v.41 no.1, pp.56.2-56.2, 2016
  • We present a multi-aperture photometry pipeline for DEEP-South (Deep Ecliptic Patrol of the Southern Sky) time-series data, written in C. The pipeline is designed to perform robust high-precision photometry and calibration of non-crowded fields with a varying point-spread function, allowing for the wholesale search and characterization of both temporal and spatial variability. Our time-series photometry method consists of three parts: (i) extracting all point sources with several pixel/blind parameters, (ii) determining the optimized aperture for each source, considering whether the measured flux within the aperture is contaminated by unwanted artifacts, and (iii) correcting position-dependent variations in the PSF shape across the mosaic CCD. In order to provide faster access to the resulting catalogs, we also utilize an efficient indexing technique using compressed bitmap indices (FastBit). Lastly, we focus on the development and application of catalog-based searches that aid the identification of highly probable single events from the indexed database. This catalog-based approach is also useful for identifying new point sources or moving objects in non-crowded fields. The performance of the pipeline is being tested on various sets of time-series data available in several archives: the DEEP-South asteroid survey and the HAT-South/MMT exoplanet survey data sets.
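Step (ii), choosing an optimized aperture per source, can be sketched with the photutils package: measure each source through several circular apertures and keep the smallest one whose flux growth has levelled off. The radii and the 1% growth criterion are illustrative assumptions, and the actual pipeline is written in C:

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def multi_aperture_flux(image: np.ndarray, x: float, y: float,
                        radii=(2.0, 3.0, 4.0, 6.0, 8.0)) -> float:
    """Measure one source through several apertures and pick an 'optimal' flux."""
    apertures = [CircularAperture((x, y), r=r) for r in radii]
    table = aperture_photometry(image, apertures)
    fluxes = np.array([table[f"aperture_sum_{i}"][0] for i in range(len(radii))])

    # Toy optimality criterion: stop growing the aperture once the extra flux
    # gained by the next radius is below 1%, to limit contamination by
    # neighbouring sources and other unwanted artifacts.
    for i in range(len(radii) - 1):
        if fluxes[i + 1] - fluxes[i] < 0.01 * fluxes[i]:
            return float(fluxes[i])
    return float(fluxes[-1])
```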


NEW PHOTOMETRIC PIPELINE TO EXPLORE TEMPORAL AND SPATIAL VARIABILITY WITH KMTNET DEEP-SOUTH OBSERVATIONS

  • Chang, Seo-Won; Byun, Yong-Ik; Shin, Min-Su; Yi, Hahn; Kim, Myung-Jin; Moon, Hong-Kyu; Choi, Young-Jun; Cha, Sang-Mok; Lee, Yongseok
    • Journal of The Korean Astronomical Society, v.51 no.5, pp.129-142, 2018
  • The DEEP-South (Deep Ecliptic Patrol of the Southern Sky) photometric census of small Solar System bodies produces massive time-series data of variable, transient, or moving objects as a by-product. To fully investigate unexplored variable phenomena, we present an application of multi-aperture photometry and FastBit indexing (for faster data access) to a portion of the DEEP-South year-one data. Our new pipeline is designed to perform automated point-source detection, robust high-precision photometry, and calibration of non-crowded fields which overlap with previously surveyed areas. In this paper, we show some examples of catalog-based variability searches to find new variable stars and to recover targeted asteroids. We discover 21 new periodic variables with periods ranging between 0.1 and 31 days, including four eclipsing binary systems (detached, over-contact, and ellipsoidal variables), one white dwarf/M dwarf pair candidate, and rotating variable stars. We also recover the astrometry (to better than ±1-2 arcsec accuracy) and photometry of two targeted near-Earth asteroids, 2006 DZ169 and 1996 SK, along with the small (~0.12 mag) and relatively large-amplitude (~0.5 mag) variations of their dominant rotational signals in the R band.
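A minimal sketch of the kind of period search behind such periodic-variable discoveries, using the Lomb-Scargle periodogram from astropy restricted to the 0.1-31 day range quoted above; the choice of periodogram is an assumption for illustration, not necessarily the paper's exact method:

```python
import numpy as np
from astropy.timeseries import LombScargle

def best_period(times_jd: np.ndarray, mags: np.ndarray,
                mag_errs: np.ndarray) -> float:
    """Return the most significant period (days) between 0.1 and 31 days."""
    frequency, power = LombScargle(times_jd, mags, mag_errs).autopower(
        minimum_frequency=1.0 / 31.0,   # longest period searched: 31 days
        maximum_frequency=1.0 / 0.1)    # shortest period searched: 0.1 days
    return 1.0 / frequency[np.argmax(power)]
```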

Design of Object-based Information System Prototype

  • Yoo, Suhyeon; Shin, Sumi; Kim, Hyesun
    • International Journal of Knowledge Content Development & Technology, v.4 no.1, pp.79-91, 2014
  • Researchers who use science and technology information were found to want an information service from which they can excerpt the content they need, rather than using the information at the article level. In this study, we broke the contents of scholarly articles down into text, image, and table micro-content and then constructed a micro-content DB to design a new information system prototype based on this micro-content. After designing the prototype, we performed a usability test to confirm the usefulness of the system prototype. We expect the outcome of this study to fulfill the segmented and diversified information needs of researchers.
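A minimal sketch of a micro-content store of the kind such a prototype could be built on, using SQLite; the table layout, field names, and sample row are assumptions for illustration, not the authors' schema:

```python
import sqlite3

conn = sqlite3.connect("micro_content.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS micro_content (
        id          INTEGER PRIMARY KEY,
        article_id  TEXT NOT NULL,     -- parent scholarly article
        kind        TEXT CHECK (kind IN ('text', 'image', 'table')),
        caption     TEXT,              -- caption or heading of the fragment
        body        TEXT               -- text content, or file path for images/tables
    )
""")
conn.execute(
    "INSERT INTO micro_content (article_id, kind, caption, body) VALUES (?, ?, ?, ?)",
    ("article-0001", "table", "Table 1. Usability test results", "table1.csv"))
conn.commit()

# Object-level retrieval: fetch only the tables of one article, not the whole paper.
rows = conn.execute(
    "SELECT caption, body FROM micro_content WHERE article_id = ? AND kind = 'table'",
    ("article-0001",)).fetchall()
```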

Development of Extracting System for Meaning·Subject Related Social Topic using Deep Learning (딥러닝을 통한 의미·주제 연관성 기반의 소셜 토픽 추출 시스템 개발)

  • Cho, Eunsook; Min, Soyeon; Kim, Sehoon; Kim, Bonggil
    • Journal of Korea Society of Digital Industry and Information Management, v.14 no.4, pp.35-45, 2018
  • Users share many kinds of content, such as text, images, and videos, on social networking services (SNS). Social media contents carry various kinds of information, including personal interests, opinions, and relationships. Therefore, many recommendation and search systems are being developed through the analysis of social media contents. In order to extract subject-related topics from the social context collected from social media channels when developing such systems, it is necessary to develop ontologies for semantic analysis. However, it is difficult to build a formal ontology because social media contents are non-formal data. Therefore, we develop a social topic system based on semantic and subject correlation. First, an extraction system for social topics based on semantic relationships analyzes semantic correlation and then extracts topics expressing the semantic information of the corresponding social context. Because the possibility of developing a formal ontology that fully expresses the semantic information of diverse areas is limited, we develop a self-extensible ontology architecture for semantic correlation. A classifier of social contents and feedback then groups contents and feedback of the same subject in order to extract social topics according to their semantic correlation. Analyzing the social contents and feedback yields subject keywords and an index, by measuring the degree of association based on the social topics' semantic correlation. Deep learning is applied to the indexing process to improve the accuracy and performance of subject extraction and semantic-correlation mapping analysis. We expect the proposed system to provide customized contents as well as optimized search results to users by analyzing semantic and subject correlations.
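A minimal sketch of the keyword-extraction-and-indexing step, using TF-IDF weights over subject-grouped social contents as a simple stand-in for the paper's deep-learning-based association measure; the vectorizer settings and sample texts are assumptions for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical social contents and their feedback, already grouped by subject.
subject_documents = {
    "travel": "weekend trip to Jeju island beach photos loved the sunset hike",
    "food":   "tried the new ramen place downtown great broth would go again",
}

vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
matrix = vectorizer.fit_transform(subject_documents.values())
terms = vectorizer.get_feature_names_out()

# Index: for each subject, keep the top-weighted terms as its topic keywords.
topic_index = {}
for row, subject in zip(matrix.toarray(), subject_documents):
    top = row.argsort()[::-1][:5]
    topic_index[subject] = [terms[i] for i in top if row[i] > 0]

print(topic_index)   # e.g. {'travel': ['jeju', 'beach', ...], 'food': [...]}
```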

Metadata Design and Machine Learning-Based Automatic Indexing for Efficient Data Management of Image Archives of Local Governments in South Korea (국내 지자체 사진 기록물의 효율적 관리를 위한 메타데이터 설계 및 기계학습 기반 자동 인덱싱 방법 연구)

  • Kim, InA; Kang, Young-Sun; Lee, Kyu-Chul
    • Journal of Korean Society of Archives and Records Management, v.20 no.2, pp.67-83, 2020
  • Many local governments in Korea provide online services so that people can easily access the audio-visual archives of events occurring in their area. However, the current method of managing these archives has several problems in terms of compatibility with other organizations and search convenience, because of the lack of standard metadata and the low utilization of image information. To solve these problems, we propose a metadata design and a machine learning-based automatic indexing technology for the efficient management of the image archives of local governments in Korea. We design metadata items specialized for the image archives of local governments to improve compatibility, and include elements that represent the basic information and characteristics of images in the metadata items, enabling efficient management. In addition, the text and objects in images, which carry information reflecting events and categories, are automatically indexed using machine learning, enhancing users' search convenience. Lastly, we developed a program that automatically extracts text and objects from image archives using the proposed method and stores the extracted contents and basic information in the metadata items we designed.
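A minimal sketch of how automatic indexing could fill machine-generated metadata items from an image; the metadata fields, the use of pytesseract for OCR, and the placeholder object detector are assumptions for illustration, not the authors' implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, List

import pytesseract              # OCR engine wrapper (assumed available)
from PIL import Image

@dataclass
class ImageRecordMetadata:
    """Illustrative subset of metadata items for a local-government photo."""
    record_id: str
    title: str
    event_date: str
    extracted_text: List[str] = field(default_factory=list)    # auto-indexed
    detected_objects: List[str] = field(default_factory=list)  # auto-indexed

def auto_index(meta: ImageRecordMetadata, image_path: str,
               object_detector: Callable) -> ImageRecordMetadata:
    """Fill the machine-generated metadata items from the image itself."""
    image = Image.open(image_path)
    # OCR the text appearing in the photo (banners, signs) for keyword search.
    meta.extracted_text = pytesseract.image_to_string(image, lang="kor+eng").split()
    # Any object-detection model can supply category labels here (hypothetical callable).
    meta.detected_objects = object_detector(image)   # e.g. ["podium", "banner"]
    return meta
```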