• Title/Summary/Keyword: point dataset


Natural Image Segmentation Considering The Cyclic Property Of Hue Component (색상의 주기성을 고려한 자연영상 분할방법)

  • Nam, Hye-Young;Kim, Wook-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.6 / pp.16-25 / 2009
  • In this paper, we propose a block-based image segmentation method that uses the cyclic property of the hue component in the HSI color model. In the proposed method, the hue representative of a region is taken as a circular center point rather than the arithmetic hue mean, respecting the cyclic nature of hue, and a directed distance is used for the hue difference between regions. Furthermore, we devise a simple and effective way to obtain the threshold (critical) values through a control parameter, reducing the computational complexity of the conventional method. Experimental results show that the regions segmented by the proposed method are more natural than those of the conventional method, especially in textured and red-tone regions. In the simulation results, the proposed method also outperforms the conventional methods when evaluated against the human segmentations provided in the Berkeley Segmentation Database.
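
The key idea in the abstract above, treating hue as a cyclic quantity, can be made concrete with a small sketch. The following is a minimal illustration (my own naming and code, not the authors' implementation): it computes a circular hue center via unit-vector averaging and a signed hue difference wrapped into [-180, 180), and shows why an arithmetic mean fails for red-tone hues that straddle the 0/360 boundary.

```python
# Hedged sketch (not the authors' code): illustrates why the arithmetic mean fails
# for cyclic hue values and how a circular "center" and a directed distance behave.
import numpy as np

def circular_hue_center(hues_deg):
    """Circular center of hue values in degrees (0-360), via unit-vector averaging."""
    rad = np.deg2rad(hues_deg)
    center = np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())
    return np.rad2deg(center) % 360.0

def hue_difference(h1_deg, h2_deg):
    """Signed (directed) hue difference wrapped into [-180, 180)."""
    return (h1_deg - h2_deg + 180.0) % 360.0 - 180.0

# Two red-tone hues that straddle the 0/360 boundary:
reds = np.array([350.0, 10.0])
print(np.mean(reds))                 # 180.0 -> cyan, clearly wrong
print(circular_hue_center(reds))     # ~0.0  -> red, as expected
print(hue_difference(350.0, 10.0))   # -20.0 degrees
```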

Study on Building Data Set Matching Considering Position Error (위치 오차를 고려한 건물 데이터 셋의 매칭에 관한 연구)

  • Kim, Ki-Rak;Huh, Yong;Yu, Ki-Yun
    • Spatial Information Research / v.19 no.2 / pp.37-46 / 2011
  • Recently, in the field of GIS (Geographic Information Systems), integrating data from various sources has become an important topic for using spatial data effectively. In general, the integration of spatial data is accomplished by finding corresponding spatial objects and combining the information associated with each object. However, it is very difficult to find the object that corresponds to one in another dataset, and many matching methods have been studied for this purpose. The purpose of this paper is to develop a method for finding corresponding spatial objects that accounts for the local position error remaining even after coordinate transformation when two different building datasets are integrated. To achieve this goal, we performed a coordinate transformation, overlapped the two datasets, and generated blocks with similar position errors. We then matched building objects within each block using a similarity measure and the ICP algorithm. Finally, we tested the applicability of the method.
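
The abstract mentions matching building objects with a similarity measure and the ICP algorithm. As a rough illustration only (simulated points, not the paper's data or code), the sketch below runs one ICP iteration on 2-D footprint points: nearest-neighbour correspondences via a k-d tree, then a rigid transform fitted with SVD.

```python
# Hedged sketch (not the paper's implementation): one basic 2-D ICP step between
# two building point sets after a rough coordinate transform.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: nearest-neighbour matching, then a rigid fit by SVD."""
    dist, idx = cKDTree(target).query(source)      # correspondences
    matched = target[idx]
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t, dist.mean()

src = np.random.rand(100, 2)
theta = np.deg2rad(3.0)
Rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
tgt = src @ Rot.T + np.array([0.5, -0.2])          # simulated local position error
aligned, mean_residual = icp_step(src, tgt)
print(round(mean_residual, 4))
```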

The reliability of tablet computers in depicting maxillofacial radiographic landmarks

  • Tadinada, Aditya;Mahdian, Mina;Sheth, Sonam;Chandhoke, Taranpreet K;Gopalakrishna, Aadarsh;Potluri, Anitha;Yadav, Sumit
    • Imaging Science in Dentistry / v.45 no.3 / pp.175-180 / 2015
  • Purpose: This study was performed to evaluate the reliability of the identification of anatomical landmarks in panoramic and lateral cephalometric radiographs on a standard medical-grade picture archiving and communication system (PACS) monitor and a tablet computer (iPad 5). Materials and Methods: A total of 1000 radiographs, including 500 panoramic and 500 lateral cephalometric radiographs, were retrieved from the de-identified dataset of the archive of the Section of Oral and Maxillofacial Radiology of the University of Connecticut School of Dental Medicine. Major radiographic anatomical landmarks were independently reviewed by two examiners on both displays. The examiners initially reviewed ten panoramic and ten lateral cephalometric radiographs using each imaging system in order to verify interoperator agreement in landmark identification. The images were scored on a four-point scale reflecting the diagnostic image quality and exposure level of the images. Results: Statistical analysis showed no significant difference between the two displays regarding the visibility and clarity of the landmarks in either the panoramic or the cephalometric radiographs. Conclusion: Tablet computers can reliably show anatomical landmarks in panoramic and lateral cephalometric radiographs.

Design of Echo Classifier Based on Neuro-Fuzzy Algorithm Using Meteorological Radar Data (기상레이더를 이용한 뉴로-퍼지 알고리즘 기반 에코 분류기 설계)

  • Oh, Sung-Kwun;Ko, Jun-Hyun
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.5 / pp.676-682 / 2014
  • In this paper, precipitation echoes (PRE) and non-precipitation echoes (N-PRE, including ground echo and clear echo) in weather radar data are identified with the aid of a neuro-fuzzy algorithm. Because meteorological radar data mix PRE and N-PRE, the accuracy of the radar information is lowered; this problem is addressed by using an RBFNN together with a judgment module. The structure of the weather radar data is analyzed in order to classify PRE and N-PRE. Input variables such as the standard deviation of reflectivity (SDZ), vertical gradient of reflectivity (VGZ), spin change (SPN), frequency (FR), reflectivity accumulated over 1 hour (1hDZ), and reflectivity accumulated over 2 hours (2hDZ) are derived from the weather radar data, and the characteristics of each input variable are analyzed. The input data are then built from the variables that have the strongest effect on the classification between PRE and N-PRE. An echo judgment module is developed to classify echoes as PRE or N-PRE using the test dataset. Polynomial-based radial basis function neural networks (RBFNNs) are used as the neuro-fuzzy algorithm, and the proposed neuro-fuzzy echo pattern classifier is designed by combining the RBFNN with the echo judgment module. Finally, the results of the proposed classifier are compared with CZ, DZ, and QC data and analyzed from the viewpoint of output performance.
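
To make the RBFNN part of the pipeline more tangible, here is a minimal radial basis function network sketch. The feature matrix, labels, center selection, and kernel width are all toy placeholders standing in for SDZ, VGZ, SPN, FR, 1hDZ, and 2hDZ; this is not the authors' polynomial-based RBFNN or judgment module.

```python
# Hedged sketch (names and data are placeholders, not the authors' code): a minimal
# RBF network separating precipitation (PRE) from non-precipitation (N-PRE) echoes
# using hand-built reflectivity features.
import numpy as np

rng = np.random.default_rng(0)

# Toy feature matrix: columns stand in for SDZ, VGZ, SPN, FR, 1hDZ, 2hDZ.
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(float)   # synthetic PRE / N-PRE labels

def rbf_design(X, centers, width):
    """Gaussian RBF activations for every sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

centers = X[rng.choice(len(X), size=10, replace=False)]   # crude center selection
Phi = rbf_design(X, centers, width=1.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)               # linear output weights
pred = (Phi @ w > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```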

The Unsupervised Learning-based Language Modeling of Word Comprehension in Korean

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.24 no.11 / pp.41-49 / 2019
  • We build an unsupervised machine learning-based language model that estimates the amount of information needed to process words composed of subword-level morphemes and syllables. We then investigate whether the reading times of words, reflecting their morphemic and syllabic structures, are predicted by an information-theoretic measure such as surprisal. Specifically, the proposed Morfessor-based unsupervised model is first trained on the large set of sentences in the Sejong Corpus and is then applied to estimate the information-theoretic measure for each word in the test data of Korean words. The reading times of the test words are taken from the Korean Lexicon Project (KLP) database. A comparison between the information-theoretic measures of the words in question and the corresponding reading times, using a linear mixed-effects model, reveals a reliable correlation between surprisal and reading time. We conclude that surprisal is positively related to processing effort (i.e., reading time), confirming the surprisal hypothesis.
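
A toy sketch of the surprisal idea follows. It replaces Morfessor with a made-up unigram morph model (the segmentations, frequencies, and reading times are invented), so it only illustrates how a word's surprisal could be summed over subword units and compared with reading times; the study itself trains Morfessor on the Sejong Corpus and fits a linear mixed-effects model.

```python
# Hedged sketch (toy stand-in, not the study's pipeline): subword-level surprisal
# from a unigram morph model, printed next to hypothetical reading times.
import math
from collections import Counter

# Toy "segmented corpus": each word is a list of morph-like subword units.
corpus = [["ha", "da"], ["ha", "go"], ["mok", "da"], ["mok", "go"], ["ha", "da"]]
freq = Counter(m for word in corpus for m in word)
total = sum(freq.values())

def word_surprisal(morphs):
    """Sum of -log2 p(morph) over a word's subword units (independence assumption)."""
    return sum(-math.log2(freq[m] / total) for m in morphs)

words = [["ha", "da"], ["mok", "go"]]
reading_times = [412.0, 463.0]           # hypothetical milliseconds
for w, rt in zip(words, reading_times):
    print("".join(w), round(word_surprisal(w), 2), rt)
```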

Fault Pattern Extraction Via Adjustable Time Segmentation Considering Inflection Points of Sensor Signals for Aircraft Engine Monitoring (센서 데이터 변곡점에 따른 Time Segmentation 기반 항공기 엔진의 고장 패턴 추출)

  • Baek, Sujeong
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.3 / pp.86-97 / 2021
  • As mechatronic systems take on varied, complex functions and require high performance, automatic fault detection is necessary for secure operation in manufacturing processes. For automatic, real-time fault detection in modern mechatronic systems, multiple sensor signals are collected through Internet of Things technologies. Traditional statistical control charts and machine learning approaches perform well when normal operating states follow unified, compact density models, but they have limitations when the signals under normal states are scattered; consequently, pattern extraction and matching approaches have received much attention. Signal discretization-based pattern extraction is a popular form of signal analysis that reduces the size of the given datasets as much as possible while highlighting significant, inherent signal behaviors. Because general pattern extraction methods usually use a fixed time-segment size, they can easily cut off significant behaviors, and the quality of the extracted fault patterns is consequently reduced. In this regard, adjustable time segmentation is proposed to extract more meaningful fault patterns from multiple sensor signals. By considering inflection points of the signals, we determine the optimal cut-points of the time segments in each sensor signal. In addition, to clarify the inflection points, we apply a Savitzky-Golay filter to the original datasets. To validate and verify the performance of the proposed segmentation, a dataset collected from an aircraft engine (provided by the NASA Prognostics Center) is used for fault pattern extraction. As a result, the proposed adjustable time segmentation shows better performance in fault pattern extraction.
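
The segmentation step described above can be sketched as follows. This is an illustrative example with a synthetic signal and assumed filter parameters (window length 31, polynomial order 3), not the paper's settings: it smooths the signal with a Savitzky-Golay filter, takes second-derivative sign changes as inflection points, and uses them as adjustable segment cut-points.

```python
# Hedged sketch (illustrative only, parameter values are assumptions): smooth a
# sensor signal, locate inflection points, and cut time segments at those points.
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 10, 500)
signal = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

smoothed = savgol_filter(signal, window_length=31, polyorder=3)
second_deriv = np.gradient(np.gradient(smoothed, t), t)
inflections = np.where(np.diff(np.sign(second_deriv)) != 0)[0]

# Segment boundaries: signal start, inflection points, signal end.
cut_points = np.concatenate(([0], inflections, [len(t) - 1]))
segments = [signal[a:b + 1] for a, b in zip(cut_points[:-1], cut_points[1:])]
print(len(segments), "segments; first lengths:", [len(s) for s in segments][:5])
```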

SIFT Image Feature Extraction based on Deep Learning (딥 러닝 기반의 SIFT 이미지 특징 추출)

  • Lee, Jae-Eun;Moon, Won-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.24 no.2 / pp.234-242 / 2019
  • In this paper, we propose a deep neural network that extracts SIFT feature points by determining whether the center pixel of a cropped image patch is a SIFT feature point. The dataset for this network consists of the DIV2K dataset cut into 33×33 patches and uses RGB images, unlike SIFT, which uses grayscale images. The ground truth consists of RobHess SIFT features extracted with the octave (scale) set to 0, sigma to 1.6, and the number of intervals to 3. Based on VGG-16, we construct increasingly deep networks of 13, 23, and 33 convolution layers and experiment with different ways of increasing the image scale. The result of using the sigmoid function as the activation of the output layer is compared with the result of using the softmax function. Experimental results show that the proposed network not only achieves more than 99% extraction accuracy but also has high extraction repeatability for distorted images.
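
As a rough sketch of the patch-classification setup (a simplified VGG-style network of my own, not the 13/23/33-layer architectures in the paper), the model below takes a 33×33 RGB patch and outputs the probability that its center pixel is a SIFT feature point.

```python
# Hedged sketch (architecture details are simplified assumptions, not the paper's
# exact network): a small VGG-style binary classifier over 33x33 RGB patches.
import torch
import torch.nn as nn

class PatchKeypointNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 33 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),       # probability of "is a SIFT keypoint"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

patches = torch.rand(4, 3, 33, 33)                 # batch of RGB patches
print(PatchKeypointNet()(patches).shape)           # torch.Size([4, 1])
```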

A Study on the Established Requirements for Records through Precedent Analysis: Focusing on "Inter-Korean Summit Meeting Minutes Deletion" Cases (판례 분석을 통한 기록의 성립 요건 검토: '남북정상회담회의록 삭제' 판례를 중심으로)

  • Lee, Cheolhwan;Zoh, Youngsam
    • Journal of Korean Society of Archives and Records Management / v.21 no.1 / pp.41-56 / 2021
  • This study analyzes the court rulings on the "Inter-Korean Summit Meeting Minutes Deletion" case, identifies how the requirements for establishment, the concept, and the scope of records prescribed in the Public Records Management Act are applied in actual cases, and summarizes the remaining tasks. It analyzes what the rulings mean by the "approval theory" as the point at which a record is established and how the meaning of approval is determined, and it examines the difference between the e-jiwon system and the On-Nara system to clarify the meaning of the rulings. Moreover, it analyzes how the "Invalidity of Public Documents" offense in Article 141 of the Criminal Act influences records management. Based on these comprehensive case analyses, the study proposes tasks for administrative agencies such as the National Archives of Korea and the Ministry of the Interior and Safety.

Machine Learning Assisted Information Search in Streaming Video (기계학습을 이용한 동영상 서비스의 검색 편의성 향상)

  • Lim, Yeon-sup
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.3 / pp.361-367 / 2021
  • Information search in video streaming services such as YouTube is replacing traditional information search services. To find specific information within a video, users must repeatedly jump to several points in the video, wasting time and network traffic. In this paper, we propose a method that assists users in searching for information in a video by using DBSCAN clustering and an LSTM. Our LSTM model is trained on a dataset consisting of user search sequences and their final target points, categorized by the DBSCAN clustering algorithm. The proposed method then uses the trained model to suggest an expected category for the user's desired target point based on the partial search sequence that can be collected at the beginning of the search. Our experimental results show that the proposed method finds users' destination points with 98% accuracy and an average time difference of 7 s.
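
A compact sketch of the two components named in the abstract is given below. The seek positions, DBSCAN parameters, and network sizes are assumptions for illustration: final target points are grouped with DBSCAN, and an LSTM maps a partial search sequence to a distribution over the resulting target clusters.

```python
# Hedged sketch (sizes and data are assumptions): DBSCAN over final target points,
# plus an LSTM that predicts the expected target cluster from a partial seek sequence.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import DBSCAN

# Final target positions (seconds into the video) from past viewing sessions.
targets = np.array([[12.0], [13.5], [14.2], [305.0], [308.1], [611.4], [613.0]])
labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(targets)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)

class SeekPredictor(nn.Module):
    """LSTM over a user's seek positions; outputs logits over target clusters."""
    def __init__(self, n_clusters, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_clusters)

    def forward(self, seq):                 # seq: (batch, time, 1) seek positions
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1])

partial_search = torch.tensor([[[10.0], [250.0], [300.0]]])   # one partial sequence
print(SeekPredictor(n_clusters)(partial_search).shape)        # logits over clusters
```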

Development of 3D Crop Segmentation Model in Open-field Based on Supervised Machine Learning Algorithm (지도학습 알고리즘 기반 3D 노지 작물 구분 모델 개발)

  • Jeong, Young-Joon;Lee, Jong-Hyuk;Lee, Sang-Ik;Oh, Bu-Yeong;Ahmed, Fawzy;Seo, Byung-Hun;Kim, Dong-Su;Seo, Ye-Jin;Choi, Won
    • Journal of The Korean Society of Agricultural Engineers / v.64 no.1 / pp.15-26 / 2022
  • A 3D open-field farm model developed from UAV (Unmanned Aerial Vehicle) data can make crop monitoring easier and can also serve as an important dataset for fields such as remote sensing and precision agriculture. It is essential to separate crop from non-crop areas automatically, because manual labeling is extremely laborious and is not appropriate for continuous monitoring. We therefore built a 3D open-field farm model from UAV images and developed a crop segmentation model using a supervised machine learning algorithm. We compared the performance of models using different data features, such as color and geographic coordinates, and two supervised learning algorithms, SVM (Support Vector Machine) and KNN (K-Nearest Neighbors). The best model was trained on two-dimensional features, ExGR (Excess Green minus Excess Red) and the z coordinate, using the KNN algorithm; its accuracy, precision, recall, and F1 score were 97.85%, 96.51%, 88.54%, and 92.35%, respectively. We also compared our model with similar previous work: our approach showed slightly better accuracy and detected actual crops better than the previous approach, although it also classified some actual non-crop points (e.g., weeds) as crops.
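
The winning feature combination (ExGR plus the z coordinate, classified with KNN) can be sketched as follows. The point cloud, labels, and the exact ExGR formulation here are illustrative assumptions, not the study's data or thresholds.

```python
# Hedged sketch (toy data, not the study's point cloud): build the ExGR colour index
# plus the z coordinate as a 2-D feature and classify crop vs. non-crop points with KNN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 500
rgb = rng.uniform(0, 1, size=(n, 3))                 # normalised R, G, B per 3-D point
z = rng.uniform(0.0, 0.6, size=n)                    # height above ground (m), toy values
r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]

exg = 2 * g - r - b                                   # excess green
exr = 1.4 * r - g                                     # excess red (a common formulation)
exgr = exg - exr                                      # ExGR = ExG - ExR

X = np.column_stack([exgr, z])
y = ((exgr > 0.2) & (z > 0.1)).astype(int)            # synthetic crop labels

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("training accuracy:", clf.score(X, y))
```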