• Title/Summary/Keyword: Database Algorithm


Development of an Analysis and Evaluation Model for Bus Transit Route Network Design (버스 노선망 설계를 위한 평가모형 개발)

  • Han, Jong-Hak;Lee, Seung-Jae;Kim, Jong-Hyeong
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.2
    • /
    • pp.161-172
    • /
    • 2005
  • This study develops a Bus Transit Route Analysis and Evaluation Model (BTRAEM) that produces quantitative performance measures for bus transit route network design. So far in Korea, few models have been able to evaluate the variety of performance measures and service-quality indicators of concern to both transit users and operators, owing to limited bus database systems and the limits of transit route network analysis algorithms. The BTRAEM differs from previous approaches in that it employs a multiple-path transit trip assignment model that explicitly considers transfers and the differing travel times after boarding. We also develop an input-output data structure and quantitative performance measures for the BTRAEM. In a numerical experiment applying the BTRAEM to the Mandl transit network, we obtained meaningful results for the performance measures of the bus transit route network. We expect the BTRAEM to provide good solutions for real transit networks in the future.
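
The abstract's key modeling choice is a trip assignment that charges transfers explicitly. The paper's multiple-path assignment model is not spelled out in the abstract, so the following is only a minimal sketch of the underlying idea: a least-cost search over (stop, line) states in which changing lines incurs a fixed penalty. All names (`best_path_cost`, the line labels, the penalty value) are illustrative assumptions.

```python
import heapq
from itertools import count

def best_path_cost(edges, transfer_penalty, origin, dest):
    """Least-cost transit path where each line change adds a fixed penalty.

    edges: dict stop -> list of (next_stop, line, ride_time).
    The search state is (stop, line), so a transfer can be charged explicitly.
    """
    tie = count()                            # heap tiebreaker; avoids comparing lines
    pq = [(0.0, next(tie), origin, None)]    # None = not yet boarded
    best = {}
    while pq:
        cost, _, stop, line = heapq.heappop(pq)
        if stop == dest:
            return cost
        if best.get((stop, line), float("inf")) <= cost:
            continue
        best[(stop, line)] = cost
        for nxt, nxt_line, ride in edges.get(stop, []):
            penalty = transfer_penalty if line is not None and nxt_line != line else 0.0
            heapq.heappush(pq, (cost + ride + penalty, next(tie), nxt, nxt_line))
    return float("inf")

# Two ways from A to B: stay on line L1 (12 min), or take shorter legs on
# L1 then L2 and pay a transfer penalty at stop T.
edges = {
    "A": [("T", "L1", 4.0), ("B", "L1", 12.0)],
    "T": [("B", "L2", 4.0)],
}
print(best_path_cost(edges, transfer_penalty=5.0, origin="A", dest="B"))  # 12.0
```

With a 5-minute transfer penalty the direct ride wins (12.0 vs 4 + 4 + 5 = 13.0); with no penalty the transfer route wins, which is exactly the sensitivity the model is meant to capture.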

Automated Geometric Correction of Geostationary Weather Satellite Images (정지궤도 기상위성의 자동기하보정)

  • Kim, Hyun-Suk;Lee, Tae-Yoon;Hur, Dong-Seok;Rhee, Soo-Ahm;Kim, Tae-Jung
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.4
    • /
    • pp.297-309
    • /
    • 2007
  • The first Korean geostationary weather satellite, Communications, Oceanography and Meteorology Satellite (COMS), will be launched in 2008. The COMS ground station needs to perform geometric correction to improve the accuracy of satellite image data and to broadcast geometrically corrected images to users within 30 minutes of image acquisition. To meet this requirement, we developed automated, fast geometric correction techniques. We generated control points automatically by matching images against coastline data and applying a robust estimation technique called RANSAC. We used the GSHHS (Global Self-consistent Hierarchical High-resolution Shoreline) database to construct 211 landmark chips. We detected clouds within the images and applied matching only to cloud-free sub-images; when matching visible channels, we selected sub-images located in daytime. We tested the algorithm with GOES-9 images. Control points were generated by matching GOES channel 1 and channel 2 images against the 211 landmark chips, and RANSAC correctly kept outliers from being selected as control points. The accuracy of the sensor models established from the automated control points was in the range of 1 to 2 pixels. Geometric correction was performed, and its performance was visually inspected by projecting coastlines onto the corrected images. The total processing time for matching, RANSAC, and geometric correction was around 4 minutes.
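
The outlier-rejection step above is classic RANSAC: hypothesize a model from a minimal sample, count inliers, keep the best consensus set. The paper's sensor model is richer than this, so as a sketch the model below is reduced to a pure 2-D translation between landmark-chip matches; the function names and tolerances are illustrative assumptions.

```python
import random

def ransac_shift(matches, tol=1.0, iters=200, seed=0):
    """Estimate a 2-D translation from point matches while rejecting outliers.

    matches: list of ((x, y), (u, v)) pairs; model: (u, v) = (x + dx, y + dy).
    Returns the shift averaged over the largest consensus (inlier) set.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        # minimal sample: one match fully determines a translation hypothesis
        (x, y), (u, v) = rng.choice(matches)
        dx, dy = u - x, v - y
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) <= tol
                   and abs((m[1][1] - m[0][1]) - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    n = len(best_inliers)
    dx = sum(u - x for (x, y), (u, v) in best_inliers) / n
    dy = sum(v - y for (x, y), (u, v) in best_inliers) / n
    return (dx, dy), best_inliers

# Eight matches follow a (5, -3) shift; two are gross mismatches (outliers).
good = [((i, 2 * i), (i + 5, 2 * i - 3)) for i in range(8)]
bad = [((0, 0), (40, 40)), ((1, 1), (-30, 7))]
shift, inliers = ransac_shift(good + bad)
print(shift, len(inliers))  # (5.0, -3.0) 8
```

An outlier hypothesis only ever gathers a consensus of one, so the translation recovered from the eight consistent matches wins, mirroring how RANSAC keeps bad landmark matches out of the control point set.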

Efficient Processing of k-Farthest Neighbor Queries for Road Networks

  • Kim, Taelee;Cho, Hyung-Ju;Hong, Hee Ju;Nam, Hyogeun;Cho, Hyejun;Do, Gyung Yoon;Jeon, Pilkyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.10
    • /
    • pp.79-89
    • /
    • 2019
  • While most research in the database community focuses on k-nearest neighbor (kNN) queries, an important type of proximity query called the k-farthest neighbor (kFN) query has received little attention. This paper addresses the problem of finding the k farthest neighbors in road networks. Given a positive integer k, a query object q, and a set of data points P, a kFN query returns the k data objects farthest from the query object q. The challenge of processing kFN queries in road networks is reducing the number of network distance computations, which is the most prominent difference between a road network and a Euclidean space. In this study, we propose an efficient algorithm called FANS for k-FArthest Neighbor Search in road networks. We present a shared-computation strategy that avoids redundant computation of the distances between a query object and data objects, along with effective pruning techniques based on the maximum distance from a query object to data segments. Finally, we demonstrate the efficiency and scalability of the proposed solution through extensive experiments on real-world roadmaps.
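
The pruning idea in the abstract, skipping a whole data segment when the maximum distance from the query to that segment cannot beat the current k-th farthest distance, can be sketched compactly. FANS operates on road-network distances; for brevity this sketch substitutes Euclidean distance and a simple (bound point, radius) segment summary, so every name here is an illustrative assumption, not the paper's data structure.

```python
import heapq
import math

def k_farthest(query, segments, k):
    """k-farthest neighbors with a segment-level upper-bound pruning rule.

    segments: list of (bound_point, radius, points). For every point p in a
    segment, dist(query, p) <= dist(query, bound_point) + radius, so the whole
    segment is skipped when that upper bound cannot beat the current k-th
    farthest distance. Euclidean distance stands in for network distance.
    """
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    heap = []   # min-heap of (distance, point): k-th farthest sits at heap[0]
    pruned = 0
    for centre, radius, points in segments:
        upper = d(query, centre) + radius
        if len(heap) == k and upper <= heap[0][0]:
            pruned += 1            # no point here can enter the result set
            continue
        for p in points:
            dist = d(query, p)
            if len(heap) < k:
                heapq.heappush(heap, (dist, p))
            elif dist > heap[0][0]:
                heapq.heapreplace(heap, (dist, p))
    return sorted(heap, reverse=True), pruned

segments = [
    ((10.0, 0.0), 1.0, [(10.0, 0.0), (9.0, 0.0), (11.0, 0.0)]),  # far cluster
    ((1.0, 0.0), 0.5, [(1.0, 0.0)]),                              # near cluster
]
result, pruned = k_farthest((0.0, 0.0), segments, k=2)
print([dist for dist, _ in result], pruned)  # [11.0, 10.0] 1
```

The near segment's upper bound (1.5) cannot exceed the current 2nd-farthest distance (10.0), so its points are never examined; in a road network, each avoided examination is an avoided network distance computation, which is the whole point of the bound.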

Identification of copy number variations using high density whole-genome single nucleotide polymorphism markers in Chinese Dongxiang spotted pigs

  • Wang, Chengbin;Chen, Hao;Wang, Xiaopeng;Wu, Zhongping;Liu, Weiwei;Guo, Yuanmei;Ren, Jun;Ding, Nengshui
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.32 no.12
    • /
    • pp.1809-1815
    • /
    • 2019
  • Objective: Copy number variations (CNVs) are a major source of genetic diversity complementary to single nucleotide polymorphisms (SNPs) in animals. The aim of this study was to perform a comprehensive genomic analysis of CNVs based on high-density whole-genome SNP markers in Chinese Dongxiang spotted pigs. Methods: We used customized Affymetrix Axiom Pig1.4M array plates containing 1.4 million SNPs and the PennCNV algorithm to identify porcine CNVs on autosomes in Chinese Dongxiang spotted pigs. Next-generation sequencing data were then used to confirm the detected CNVs, and functional analysis was performed for the gene content of copy number variation regions (CNVRs). In addition, we compared the identified CNVRs with previously reported CNVRs and with quantitative trait loci (QTL) in the pig QTL database. Results: We identified 871 putative CNVs belonging to 2,221 CNVRs on 17 autosomes. After discarding CNVRs detected in only one individual, 166 CNVRs remained. These ranged from 2.89 kb to 617.53 kb, with a mean length of 93.65 kb and a genome coverage of 15.55 Mb, corresponding to 0.58% of the pig genome. A total of 119 (71.69%) of the identified CNVRs were confirmed by the next-generation sequencing data. Functional annotation showed that these CNVRs are involved in a variety of molecular functions. More than half of the CNVRs (n = 94; 56.63%) had been reported in previous studies, while 72 CNVRs are reported here for the first time. In addition, 162 (97.59%) CNVRs overlapped with 2,765 previously reported QTLs affecting 378 phenotypic traits. Conclusion: The findings improve the catalog of pig CNVs and provide insights and novel molecular markers for further genetic analyses of Chinese indigenous pigs.
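
Two steps in the pipeline above are purely positional: merging per-individual CNV calls into CNVRs, and discarding CNVRs detected in only one individual. A minimal sketch of that interval logic follows; the input layout and sample identifiers are assumptions for illustration, not PennCNV's actual output format.

```python
def build_cnvrs(cnvs, min_samples=2):
    """Merge per-individual CNV calls into CNV regions (CNVRs).

    cnvs: list of (chrom, start, end, sample_id). Overlapping calls on the
    same chromosome are merged into one region; regions supported by fewer
    than min_samples individuals are discarded, mirroring the paper's filter
    of CNVRs detected in only one individual.
    """
    regions = []
    for chrom in sorted({c[0] for c in cnvs}):
        calls = sorted(c for c in cnvs if c[0] == chrom)
        cur_start, cur_end, samples = None, None, set()
        for _, start, end, sid in calls:
            if cur_start is None or start > cur_end:
                # gap before this call: close out the current region
                if cur_start is not None and len(samples) >= min_samples:
                    regions.append((chrom, cur_start, cur_end, len(samples)))
                cur_start, cur_end, samples = start, end, {sid}
            else:
                cur_end = max(cur_end, end)
                samples.add(sid)
        if cur_start is not None and len(samples) >= min_samples:
            regions.append((chrom, cur_start, cur_end, len(samples)))
    return regions

cnvs = [
    ("chr1", 100, 200, "pig1"), ("chr1", 150, 260, "pig2"),  # overlap -> CNVR
    ("chr1", 500, 600, "pig3"),                              # singleton -> drop
    ("chr2", 10, 90, "pig1"), ("chr2", 40, 80, "pig4"),
]
print(build_cnvrs(cnvs))  # [('chr1', 100, 260, 2), ('chr2', 10, 90, 2)]
```

The singleton call on chr1 is dropped exactly as the study drops single-individual CNVRs, leaving only regions supported by at least two animals.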

Object-Based Road Extraction from VHR Satellite Image Using Improved Ant Colony Optimization (개선된 개미 군집 최적화를 이용한 고해상도 위성영상에서의 객체 기반 도로 추출)

  • Kim, Han Sae;Choi, Kang Hyeok;Kim, Yong Il;Kim, Duk-Jin;Jeong, Jae Joon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.3
    • /
    • pp.109-118
    • /
    • 2019
  • Road information is one of the most significant types of geospatial data for applications such as transportation, city planning, map generation, LBS (Location-Based Services), and GIS (Geographic Information System) database updates. Robust technologies for acquiring and updating accurate road information can therefore contribute significantly to the geospatial industry. In this study, we analyze the limitations of ACO (Ant Colony Optimization) road extraction, a recently introduced object-based road extraction method for high-resolution satellite images. Object-based ACO road extraction can efficiently extract road areas using both spectral and morphological information. The method, however, is highly dependent on object descriptor information, requires manual specification of the descriptors, and needs a reasonable stopping point for the iteration to be chosen. In this study, we perform improved ACO road extraction on a VHR (Very High Resolution) optical satellite image by proposing an optimization stopping criterion and descriptors that address the limitations of the existing method. The proposed method achieved improvements of 52.51% in completeness, 6.12% in correctness, and 51.53% in quality over the existing algorithm.
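
For readers unfamiliar with ACO itself, the core loop is pheromone-guided stochastic search: ants build candidate paths, pheromone evaporates, and good paths get reinforced. The sketch below applies that loop to a toy cheapest-path problem; the paper's image descriptors and its proposed stopping criterion are not reproduced here, and all parameter values are illustrative assumptions.

```python
import random

def aco_path(graph, start, goal, n_ants=20, n_iters=30, rho=0.5, seed=1):
    """Minimal ant colony optimization for a cheapest path.

    graph: dict node -> {neighbor: edge_cost}. Ants pick edges with
    probability proportional to pheromone * (1 / cost); pheromone evaporates
    at rate rho each iteration and cheap tours deposit more pheromone.
    """
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone trails

    def walk():
        node, path, cost, seen = start, [start], 0.0, {start}
        while node != goal:
            choices = [(v, c) for v, c in graph[node].items() if v not in seen]
            if not choices:
                return None, float("inf")
            weights = [tau[(node, v)] * (1.0 / c) for v, c in choices]
            v, c = rng.choices(choices, weights=weights)[0]
            path.append(v); cost += c; seen.add(v); node = v
        return path, cost

    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        tours = [walk() for _ in range(n_ants)]
        for key in tau:                       # evaporation
            tau[key] *= (1.0 - rho)
        for path, cost in tours:
            if path is None:
                continue
            if cost < best_cost:
                best_path, best_cost = path, cost
            deposit = 1.0 / cost              # cheaper tours deposit more
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += deposit
    return best_path, best_cost

graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 1.0, "D": 5.0},
    "C": {"D": 1.0},
    "D": {},
}
print(aco_path(graph, "A", "D"))  # (['A', 'B', 'C', 'D'], 3.0)
```

The iteration-count question the paper raises is visible even here: `n_iters` is fixed by hand, whereas the paper's contribution is a principled criterion for when to stop reinforcing.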

Industrial Technology Leak Detection System on the Dark Web (다크웹 환경에서 산업기술 유출 탐지 시스템)

  • Young Jae, Kong;Hang Bae, Chang
    • Smart Media Journal
    • /
    • v.11 no.10
    • /
    • pp.46-53
    • /
    • 2022
  • Today, owing to the 4th industrial revolution and extensive R&D funding, domestic companies have come to possess world-class industrial technologies, which have grown into important assets. The national government designates companies' critical industrial technologies as "national core technologies" in order to protect them. In particular, technology leaks in the shipbuilding, display, and semiconductor industries can result in a significant loss of competitiveness not only at the company level but also at the national level. Every year there are more insider leaks, ransomware attacks, and attempts to steal industrial technology through industrial espionage, and the stolen technology is then traded covertly on the dark web. In this paper, we propose a system for detecting industrial technology leaks in the dark web environment. The proposed model first builds a database through dark web crawling, using information collected from the OSINT environment. Keywords for industrial technology leakage are then extracted using the KeyBERT model, and signs of leakage on the dark web are reported as quantitative figures. Finally, based on the identified leak sites, the possibility of secondary leakage is detected through the PageRank algorithm. The proposed method enabled the collection of 27,317 unique dark web domains and the extraction of 15,028 nuclear-energy-related keywords from 100 nuclear power patents, and 12 dark web sites were identified by detecting secondary leaks from the highest-ranked nuclear-leak dark web sites.
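
The secondary-leak step ranks sites by PageRank over the crawled link graph. A plain power-iteration PageRank is short enough to show in full; the site names below are invented placeholders, and the paper's actual graph construction from crawled domains is not reproduced.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a site link graph.

    links: dict site -> list of outbound sites. High-scoring sites are those
    that many (well-linked) sites point to, which is how secondary leak
    candidates can be surfaced from known leak sites.
    """
    nodes = set(links) | {v for outs in links.values() for v in outs}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for u, outs in links.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

links = {
    "leak-market": ["mirror-1", "mirror-2"],
    "forum": ["mirror-1"],
    "mirror-1": [],
    "mirror-2": [],
}
scores = pagerank(links)
top = max(scores, key=scores.get)
print(top)  # mirror-1: it receives the most inbound weight
```

Because "mirror-1" is linked from both seed sites while "mirror-2" gets only half of one site's weight, it surfaces first, which is the intuition behind using PageRank to flag likely secondary-leak sites.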

Application and development of a machine learning based model for identification of apartment building types - Analysis of apartment site characteristics based on main building shape - (머신러닝 기반 아파트 주동형상 자동 판별 모형 개발 및 적용 - 주동형상에 따른 아파트 개발 특성분석을 중심으로 -)

  • Sanguk HAN;Jungseok SEO;Sri Utami Purwaningati;Jeongseob KIM
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.26 no.2
    • /
    • pp.55-67
    • /
    • 2023
  • This study aims to develop a model that can automatically identify the rooftop shape of apartment buildings using GIS and machine learning algorithms, and applies it to analyze the relationship between rooftop shape and the characteristics of apartment complexes. A database of rooftop data for each building in an apartment complex was constructed using geospatial data, and individual buildings within each complex were classified into flat, tower, and mixed types using the random forest algorithm. In addition, the relationships among the proportion of rooftop shapes, development density, height, and other characteristics of apartment complexes were analyzed to propose potential applications of geospatial information in the real estate field. This study is expected to serve as foundational research on AI-based building type classification and to be utilized in various spatial and real estate analyses.
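
A classifier like the one described needs shape descriptors computed from each building footprint. The sketch below derives two such descriptors (elongation and rectangular fill) from a footprint polygon and applies a hand-set threshold as a toy stand-in for the trained random forest; the paper's actual feature set and model are not specified in the abstract, so everything here is an assumption for illustration.

```python
def shape_features(poly):
    """Elongation and rectangular fill of a footprint polygon [(x, y), ...].

    These are the kinds of descriptors a random-forest classifier could be
    trained on; the study's actual feature set is not given in the abstract.
    """
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    # polygon area via the shoelace formula
    area = abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                   - poly[(i + 1) % len(poly)][0] * poly[i][1]
                   for i in range(len(poly)))) / 2.0
    elongation = max(w, h) / min(w, h)
    fill = area / (w * h)
    return elongation, fill

def classify(poly, elongation_cut=2.5):
    """Toy stand-in rule: long, thin footprints -> flat (slab) type; compact
    footprints -> tower type. A trained random forest over many features
    would replace this single hand-set threshold."""
    elongation, _ = shape_features(poly)
    return "flat" if elongation >= elongation_cut else "tower"

slab = [(0, 0), (60, 0), (60, 12), (0, 12)]    # long slab-type block
tower = [(0, 0), (20, 0), (20, 18), (0, 18)]   # compact tower-type block
print(classify(slab), classify(tower))  # flat tower
```

The point of the sketch is the pipeline shape, footprint polygon to numeric descriptors to class label, rather than the decision rule itself.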

Structural Optimization and Improvement of Initial Weight Dependency of the Neural Network Model for Determination of Preconsolidation Pressure from Piezocone Test Result (피에조콘을 이용한 선행압밀하중 결정 신경망 모델의 구조 최적화 및 초기 연결강도 의존성 개선)

  • Kim, Young-Sang;Joo, No-Ah;Park, Hyun-Il;Park, Sol-Ji
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.3C
    • /
    • pp.115-125
    • /
    • 2009
  • The preconsolidation pressure has commonly been determined by the oedometer test. However, it can also be determined from in-situ tests, such as the piezocone test, using theoretical and/or empirical correlations. Recently, neural network (NN) theory has been applied, and several models have been proposed to estimate the preconsolidation pressure or OCR. It has already been found that NN models can overcome site dependency, and their prediction accuracy is greatly improved compared with existing theoretical and empirical models. However, since the optimization of an NN model's synaptic weights depends on the initial weights, NN models trained with different initial weights cannot avoid variability in their predictions for new databases, even when they share the same structure and transfer function. In this study, a committee neural network (CNN) model is proposed to reduce the initial-weight dependency of multi-layered neural network models in predicting the preconsolidation pressure of soft clay from piezocone test results. Prediction results of the CNN model are compared with those of conventional empirical and theoretical models and with a multi-layered neural network model having an optimized structure. It was found that even an NN model with a structure optimized for a given training data set still depends on its initial weights, whereas the proposed CNN model reduces this dependency and provides more consistent and precise inference results than existing NN models.
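
The committee idea is simply to average the outputs of several networks trained from different initial weights, so the init-dependent errors partially cancel. The sketch below demonstrates that averaging effect with stand-in "members" whose predictions carry an init-dependent offset (an assumption for illustration; the study's actual networks and training are not reproduced).

```python
import random

class Committee:
    """Average the outputs of several trained networks (CNN here means
    committee neural network, not convolutional). Members trained from
    different initial weights disagree; the committee mean damps that
    initial-weight variability."""

    def __init__(self, members):
        self.members = members

    def predict(self, x):
        return sum(m(x) for m in self.members) / len(self.members)

# Stand-in for networks trained from different random initial weights:
# each member's output is the true mapping plus an init-dependent offset.
rng = random.Random(7)
true_fn = lambda x: 2.0 * x + 1.0
members = []
for _ in range(10):
    offset = rng.uniform(-0.5, 0.5)        # assumed init-dependent bias
    members.append(lambda x, o=offset: true_fn(x) + o)

committee = Committee(members)
x = 3.0
spread = max(abs(m(x) - true_fn(x)) for m in members)  # worst single model
err = abs(committee.predict(x) - true_fn(x))           # committee error
print(err < spread)  # True: averaging beats the worst single model
```

Since the committee's error is the absolute mean of the member offsets, it is always no worse than the largest single offset, which is the consistency gain the study reports for the CNN over individual NN models.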

AI-based stuttering automatic classification method: Using a convolutional neural network (인공지능 기반의 말더듬 자동분류 방법: 합성곱신경망(CNN) 활용)

  • Jin Park;Chang Gyun Lee
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.71-80
    • /
    • 2023
  • This study aimed to develop an automated stuttering identification and classification method using artificial intelligence technology, in particular a deep learning-based identification model utilizing the convolutional neural network (CNN) algorithm for Korean speakers who stutter. To this end, speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrasal level using Google Cloud speech-to-text (STT), and the labels 'fluent', 'blockage', 'prolongation', and 'repetition' were assigned to the segments. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used for detecting and classifying each type of stuttered disfluency. In the case of prolongation, however, only five instances were found, so this class was excluded from the classifier model. Results showed that the accuracy of the CNN classifier was 0.96, and the F1-scores for classification performance were as follows: 'fluent' 1.00, 'blockage' 0.67, and 'repetition' 0.74. Although the effectiveness of the automatic classifier was validated using CNNs to detect stuttered disfluencies, its performance was inadequate, especially for the blockage and prolongation types. Consequently, establishing a large speech database that collects data by type of stuttered disfluency was identified as a necessary foundation for improving classification performance.
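
The front end of such a pipeline turns each phrase-level segment into frame-wise acoustic features before the CNN sees them. Real MFCC extraction needs a mel filter bank and DCT, which is beyond a short sketch, so the stand-in below computes two simpler frame statistics (short-time energy and zero-crossing rate) to show the windowing structure; the frame sizes and sampling rate are assumptions.

```python
import math

def frame_features(signal, frame_len=160, hop=80):
    """Short-time energy and zero-crossing rate per frame, as lightweight
    stand-ins for an MFCC front end. Blocks and prolongations alter exactly
    these kinds of frame-level statistics, which is what a frame-based
    classifier keys on."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / frame_len
        feats.append((energy, zcr))
    return feats

# 0.1 s of silence followed by 0.1 s of a 200 Hz tone at an assumed 16 kHz rate
sr = 16000
signal = [0.0] * (sr // 10) + [math.sin(2 * math.pi * 200 * t / sr)
                               for t in range(sr // 10)]
feats = frame_features(signal)
silent, voiced = feats[0], feats[-1]
print(silent[0] < voiced[0])  # True: energy rises when voicing starts
```

In a full system, each frame would instead carry an MFCC vector, and the CNN would classify windows of such frames into the disfluency labels above.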

Study on Improving the Navigational Safety Evaluation Methodology based on Autonomous Operation Technology (자율운항기술 기반의 선박 통항 안전성 평가 방법론 개선 연구)

  • Jun-Mo Park
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.30 no.1
    • /
    • pp.74-81
    • /
    • 2024
  • In the near future, autonomous ships, ships controlled by shore remote control centers, and ships operated by navigators will coexist at sea. As this situation emerges, a method is required for evaluating the safety of the maritime traffic environment. In this study, therefore, a plan was proposed for evaluating navigational safety through ship-handling simulation in a maritime environment where ships directly controlled by navigators coexist with autonomous ships. The own ship was given autonomous operational functions by training an MMG model based on six-DOF motion with the PPO algorithm, a deep reinforcement learning technique. For the target ships, maritime traffic modeling data were constructed from the maritime traffic data of the sea area to be evaluated, and autonomous operational functions were designed for implementation in the simulation space. A numerical model was established by collecting data on tide, waves, currents, and wind from the maritime meteorological database; a maritime meteorology model was created from it and designed to reproduce maritime weather in the simulator. Finally, the proposed safety evaluation enables assessment of collision risk through vessel traffic flow simulation in the ship-handling simulator while maintaining the existing evaluation method.
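
PPO, the training algorithm named above, is defined around a clipped surrogate objective that limits how far each update can move the policy from the one that collected the data. The study's full ship-control setup is far larger than a sketch, but the objective itself is compact; the function name below is an illustrative assumption.

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate objective for a single action, to be averaged
    over a batch and maximized.

    ratio: new_policy_prob / old_policy_prob for the taken action.
    advantage: estimated advantage of that action.
    Clipping the ratio to [1 - eps, 1 + eps] caps the incentive to move the
    policy far from the data-collecting policy in one update.
    """
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)  # clip(ratio, 1-eps, 1+eps)
    return min(ratio * advantage, clipped * advantage)

# A large ratio with a positive advantage is capped at (1 + eps) * advantage.
print(ppo_clip_loss(1.5, 1.0))   # 1.2
# A shrinking ratio with a negative advantage is floored at (1 - eps) * advantage.
print(ppo_clip_loss(0.5, -1.0))  # -0.8
```

In the study's setting, the "action" would be a rudder/propulsion command for the MMG-modeled own ship, and maximizing this objective over simulated episodes yields the autonomous operational function.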