• Title/Summary/Keyword: Automatic methodology

Estimation of Longitudinal Dynamic Stability Derivatives for a Tailless Aircraft Using Dynamic Mesh Method (Dynamic Mesh 기법을 활용한 무미익 비행체 종축 동안정 미계수 예측)

  • Chung, Hyoung-Seog; Yang, Kwang-Jin; Kwon, Ky-Beom; Lee, Ho-Keun; Kim, Sun-Tae; Lee, Myung-Sup; Reu, Taekyu
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.43 no.3 / pp.232-242 / 2015
  • For stealth performance considerations, many UAV designs adopt tailless lambda-shaped configurations, which are likely to have unsteady dynamic characteristics. To control such UAVs through an automatic flight control system, accurate estimation of the dynamic stability derivatives becomes essential. In this paper, the dynamic stability derivatives of a tailless lambda-shaped UAV are estimated through a numerically simulated forced oscillation method incorporating a dynamic mesh technique. First, the methodology is validated by benchmarking the CFD results against previously published experimental results for the Standard Dynamics Model (SDM). The dependence of the dynamic stability derivatives of the tailless UAV configuration on initial angle of attack, oscillation frequency, and oscillation magnitude is then studied. The results show reasonable agreement with the experimental reference data and demonstrate the validity and efficiency of using CFD to estimate the dynamic derivatives.
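
A minimal numerical sketch of the forced-oscillation analysis described above, using a synthetic pitching-moment history in place of the CFD dynamic-mesh output; the flow conditions, frequency, and derivative values are illustrative assumptions, not the paper's settings. The in-phase Fourier component of Cm recovers Cm_alpha, and the out-of-phase component recovers the damping sum Cm_q + Cm_alpha_dot.

```python
# Forced-oscillation extraction sketch (assumed, illustrative values only).
import numpy as np

V, c = 100.0, 1.0                                   # freestream speed [m/s], reference chord [m] (assumed)
alphaA = np.deg2rad(1.0)                            # oscillation amplitude about the mean AoA
omega = 10.0                                        # forcing frequency [rad/s]
k = omega * c / (2.0 * V)                           # reduced frequency

t = np.linspace(0.0, 5 * 2.0 * np.pi / omega, 5000, endpoint=False)   # five full cycles
# Synthetic Cm(t) standing in for the CFD dynamic-mesh output:
Cm_alpha_true, Cm_damp_true = -0.8, -3.0
Cm = (0.01 + Cm_alpha_true * alphaA * np.sin(omega * t)
      + Cm_damp_true * k * alphaA * np.cos(omega * t))

# First Fourier coefficients over an integer number of cycles
in_phase = 2.0 * np.mean(Cm * np.sin(omega * t))    # = Cm_alpha * alphaA
out_phase = 2.0 * np.mean(Cm * np.cos(omega * t))   # = (Cm_q + Cm_alpha_dot) * k * alphaA

print("Cm_alpha          ~", in_phase / alphaA)
print("Cm_q + Cm_adot    ~", out_phase / (k * alphaA))
```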

Personal Information Detection by Using Naïve Bayes Methodology (Naïve Bayes 방법론을 이용한 개인정보 분류)

  • Kim, Nam-Won; Park, Jin-Soo
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.91-107 / 2012
  • As the Internet becomes more popular, many people use it to communicate. With the increasing number of personal homepages, blogs, and social network services, people often expose their personal information online. Although the necessity of these services cannot be denied, we should be concerned about negative aspects such as personal information leakage. Because it is impossible to review all of the past records posted by everyone, an automatic personal information detection method is strongly required. This study proposes a method to detect or classify online documents that contain personal information by analyzing features common to personal-information-related documents and learning them with the Naïve Bayes algorithm. To select the document classification algorithm, the Naïve Bayes classifier was compared with a Vector Space classifier; Naïve Bayes achieved higher precision, recall, F-measure, and accuracy. However, its performance is still insufficient for real-world application. Lewis, a learning algorithm researcher, states that it is important to improve the quality of category features when applying learning algorithms to a specific domain, and proposes incrementally adding features that depend on related documents in a step-wise manner. In a further experiment, the algorithm learns these additional dependent features, thereby reducing feature noise. As a result, the latter experiment shows better measured performance than the former.
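
A minimal sketch of the Naïve Bayes document-classification step described above, assuming a scikit-learn setup; the toy documents, labels, and tokenization are placeholders, not the study's corpus or feature engineering.

```python
# Naive Bayes text classification sketch (toy data, not the paper's dataset).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    "my phone number is 010-0000-0000 and my home address follows",  # contains personal information
    "resident registration number attached for identity verification",  # contains personal information
    "today we release version 2.0 of our open source library",          # ordinary document
    "the weather was great during the weekend hiking trip",             # ordinary document
]
train_labels = [1, 1, 0, 0]            # 1 = personal information present

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(train_docs, train_labels)

test_docs = ["please call me at 010-1234-5678", "new cafe opened downtown"]
print(model.predict(test_docs))        # predicted labels for new documents
print(model.predict_proba(test_docs))  # class probabilities from the Naive Bayes model
```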

Development of the Algorithm for Traffic Accident Auto-Detection in Signalized Intersection (신호교차로 내 실시간 교통사고 자동검지 알고리즘 개발)

  • O, Ju-Taek; Im, Jae-Geuk; Hwang, Bo-Hui
    • Journal of Korean Society of Transportation / v.27 no.5 / pp.97-111 / 2009
  • Image-based traffic information collection systems have entered widespread use in many countries, not only because they can replace existing loop-based detectors, which have limitations in management and administration, but also because they can provide and manage a wide variety of traffic-related information. These systems are also expanding rapidly in purpose and scope of use. Currently, however, the use of image processing in traffic accident management is limited to installing surveillance cameras at locations where accidents are expected and digitizing the recorded data. Accurately recording the sequence of events around a traffic accident at a signalized intersection, and then objectively and clearly analyzing how the accident occurred, is the most urgent and important task in resolving it. Many previous studies have pointed out that existing advanced techniques struggle with real-time processing because of large data volumes, and that separating and tracking vehicle objects is difficult under the diverse and changing environmental conditions of a complex signalized intersection. This research therefore presents and implements an active, environmentally adaptive methodology that effectively reduces the false detections which occur frequently even with the Gaussian mixture model, generally considered the best of the well-known environmental noise reduction methods. To show that the developed technology outperforms existing automatic traffic accident recording systems, it was tested online in real time with image data from an intersection in actual operation, and the results were compared with the performance of existing technologies.
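
The abstract's reference to a Gaussian mixture background model can be illustrated with a short OpenCV sketch; this shows only generic MOG2 foreground extraction with simple noise suppression, not the paper's adaptive accident-detection pipeline, and the video path is a placeholder.

```python
# Gaussian mixture (MOG2) background subtraction sketch for vehicle foreground extraction.
import cv2

cap = cv2.VideoCapture("intersection.mp4")    # placeholder video path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)                               # per-pixel foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)          # suppress small false detections
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    # 'vehicles' would feed a downstream tracking / accident-detection stage.

cap.release()
```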

Quality Improvement Method on Grammatical Errors of Information System Audit Report (정보시스템 감리보고서의 문법적 오류에 대한 품질 향상 방안)

  • Lee, Don Hee; Lee, Gwan Hyung; Moon, Jin Yong; Kim, Jeong Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.2 / pp.211-219 / 2019
  • Techniques and methodologies for conducting information system audits have been studied continuously and are of great help to the auditors who use them. The audit report, which is the final product of an information system audit (ISA), also has a legal basis and status under the ITA/EA Law (Electronic Government Act). This paper aims to improve the quality of ISA reports, which still contain many errors in sentences and grammatical structure. To achieve this, the importance of the audit report is first established by examining its objectives, functions, structure, usability, and legal basis. Several types of audit reports were then selected, their errors were divided into categories and analyzed, and, after identifying the causes of those errors, methods for fixing them and a checklist model were provided. On that foundation, the effectiveness of the approach was validated against real audit reports. The conclusion emphasizes the need for continued efforts to improve audit report quality and identifies an AI-based automatic checking tool as a subject for further research. We also expect this paper to be useful for organizations seeking to improve ISA in the future.

Automatic Geo-referencing of Sequential Drone Images Using Linear Features and Distinct Points (선형과 특징점을 이용한 연속적인 드론영상의 자동기하보정)

  • Choi, Han Seung; Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.1 / pp.19-28 / 2019
  • Images captured by drones have the advantage of quickly constructing spatial information over small areas and are applied in fields that require quick decision making. If an image registration technique that can automatically register a drone image onto an ortho-image with a ground coordinate system is applied, the result can be used for various analyses. In this study, a methodology was proposed for geo-referencing both a single drone image and sequential drone images using linear features and distinct points, even when the images differ in spatio-temporal resolution. Projective transformation parameters for the initial geo-referencing between images were determined from the linear features, and the final geo-referencing was then performed through template matching on distinct points extracted from the images. Experimental results showed that geo-referencing accuracy was high in areas where relief displacement of the terrain was not large. On the other hand, there were some quantitative errors in areas where the terrain changed greatly. Nevertheless, the geo-referencing results for the sequential images were considered fully usable for qualitative analysis.
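
A minimal sketch of the two-stage idea described above (an initial projective transform followed by template matching on a distinct point), using synthetic images so it runs standalone; the correspondences, transform, and window size are illustrative assumptions, not the authors' implementation.

```python
# Projective transform + template matching sketch (synthetic stand-in imagery).
import cv2
import numpy as np

rng = np.random.default_rng(0)
ortho = rng.integers(0, 255, size=(800, 1200), dtype=np.uint8)            # stand-in ortho-image
H_true = np.float32([[1.0, 0.02, 180.0], [-0.01, 1.0, 130.0], [0.0, 0.0, 1.0]])
drone = cv2.warpPerspective(ortho, np.linalg.inv(H_true), (1200, 800))    # stand-in drone frame

# 1) Initial geo-referencing: projective transform from four correspondences
#    (in the paper these come from matched linear features).
src = np.float32([[120, 80], [900, 95], [880, 640], [140, 660]])          # drone-image points
dst = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H_true).reshape(-1, 2)  # ortho points
H = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(drone, H, (ortho.shape[1], ortho.shape[0]))

# 2) Refinement: template matching of a distinct point's neighbourhood
patch = warped[400:464, 500:564]                       # 64x64 window around a distinct point
res = cv2.matchTemplate(ortho, patch, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(res)             # best-matching location in the ortho-image
print("match score:", score, "at", top_left)
```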

Collision Risk Assessment by using Hierarchical Clustering Method and Real-time Data (계층 클러스터링과 실시간 데이터를 이용한 충돌위험평가)

  • Vu, Dang-Thai; Jeong, Jae-Yong
    • Journal of the Korean Society of Marine Environment & Safety / v.27 no.4 / pp.483-491 / 2021
  • The identification of regional collision risks in water areas is significant for the safety of navigation. This paper introduces a new collision risk assessment method that combines a distance-based clustering method (hierarchical clustering) with real-time data, a grouping methodology, and a preliminary assessment to classify vessels and form the basis of the collision risk evaluation when several vessels are in proximity (called HCAAP processing). The vessels are clustered with the hierarchical procedure to obtain clusters of encountering vessels, and these clusters are combined with the preliminary assessment to filter out relatively safe vessels. Subsequently, the distance at the closest point of approach (DCPA) and the time to the closest point of approach (TCPA) between encountering vessels within each cluster are calculated and related to the collision risk index (CRI). The mathematical relationship between the CRI of each cluster of encountering vessels and DCPA and TCPA is constructed using a negative exponential function. Operators can easily evaluate the safety of all vessels navigating in the defined area using the calculated CRI. Therefore, this framework can improve the safety and security of vessel traffic and reduce the loss of life and property. To illustrate the effectiveness of the proposed framework, an experimental case study was conducted in the coastal waters of Mokpo, Korea. The results demonstrated that the framework was effective and efficient in detecting and ranking collision risk indexes between encountering vessels within each cluster, allowing an automatic risk prioritization of encountering vessels for further investigation by operators.
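
A minimal sketch of the clustering, DCPA/TCPA, and CRI chain described above; the vessel states and the coefficients of the negative exponential are illustrative assumptions, since the paper fits its own CRI relationship.

```python
# Hierarchical clustering + DCPA/TCPA + negative-exponential CRI sketch (toy data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# state per vessel: x, y [nautical miles], vx, vy [knots]
vessels = np.array([
    [0.0, 0.0, 10.0,  0.0],
    [1.0, 0.2, -8.0,  0.0],
    [5.0, 6.0,  0.0, -9.0],
    [5.5, 6.4,  0.0,  9.0],
])

# 1) Hierarchical clustering on position to group encountering vessels
labels = fcluster(linkage(vessels[:, :2], method="average"), t=2.0, criterion="distance")

def dcpa_tcpa(a, b):
    r = b[:2] - a[:2]                  # relative position
    v = b[2:] - a[2:]                  # relative velocity
    if np.allclose(v, 0):
        return float(np.linalg.norm(r)), np.inf
    tcpa = -np.dot(r, v) / np.dot(v, v)            # hours
    dcpa = np.linalg.norm(r + v * max(tcpa, 0.0))  # nautical miles
    return float(dcpa), float(tcpa)

# 2) CRI as a negative exponential of DCPA and TCPA (coefficients assumed)
def cri(dcpa, tcpa, a=1.0, b=0.1):
    if tcpa < 0:
        return 0.0                     # vessels already diverging
    return float(np.exp(-(a * dcpa + b * tcpa)))

for i in range(len(vessels)):
    for j in range(i + 1, len(vessels)):
        if labels[i] == labels[j]:     # only evaluate pairs within the same cluster
            d, t = dcpa_tcpa(vessels[i], vessels[j])
            print(f"pair ({i},{j}): DCPA={d:.2f} nm, TCPA={t:.2f} h, CRI={cri(d, t):.3f}")
```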

A Study on Automatic Calculation of Earth-volume Using 3D Model of B-Rep Solid Structure (B-Rep Solid 구조의 3차원 모델을 이용한 토공량 자동 산정에 관한 연구)

  • Kim, Jong Nam; Um, Dae Yong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.5 / pp.403-412 / 2022
  • As the 4th industrial revolution is in full swing and next-generation ICT (Information and Communications Technology) convergence technologies are being developed, various smart construction technologies are rapidly being introduced in the construction field to respond to these technological changes. In particular, since the earth-volume calculation process for site design accounts for a large part of the design cost at a construction site, related research is actively being conducted to improve the efficiency of the process and calculate earth-volume accurately. The purpose of this study is to present a method for quickly constructing the topography of a construction site in 3D and efficiently calculating earth-volume from the result. For this purpose, the construction site was built as a 3D realistic model using large-scale aerial photos obtained from a UAV (Unmanned Aerial Vehicle). Because the constructed realistic model has a surface-model structure for which volume calculation is impossible, it was converted into a 3D solid model, and a methodology was devised to calculate earth-volume based on CAD (Computer-Aided Design and Drafting) using the converted solid model. When earth-volume was automatically calculated from the solid model by applying this method, the result showed a relative deviation of 1.52% from the earth-volume calculated from the existing survey. In addition, a comparison of the processing time required by each method confirmed that the required time was reduced by about 60%. The technique presented in this study is expected to be utilized for smart construction management, such as periodic site monitoring throughout the entire construction process, as well as for reducing the cost of earth-volume calculation.
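
For illustration only, a simplified grid-based cut-and-fill computation; it substitutes a raster surface difference for the paper's CAD/B-Rep solid approach, and the surfaces and cell size are placeholders.

```python
# Grid-based cut/fill earth-volume sketch (not the paper's B-Rep solid method).
import numpy as np

cell = 1.0                                           # grid cell size in metres (assumed)
rng = np.random.default_rng(0)
terrain = rng.uniform(48.0, 52.0, size=(200, 200))   # surveyed surface elevations (placeholder)
design = np.full_like(terrain, 50.0)                 # planned grade (placeholder)

diff = terrain - design                              # + means material to cut, - means fill
cut = diff[diff > 0].sum() * cell * cell             # m^3 of excavation
fill = -diff[diff < 0].sum() * cell * cell           # m^3 of embankment
print(f"cut = {cut:.1f} m3, fill = {fill:.1f} m3, net = {cut - fill:.1f} m3")
```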

Prediction of Dormant Customer in the Card Industry (카드산업에서 휴면 고객 예측)

  • DongKyu Lee; Minsoo Shin
    • Journal of Service Research and Studies / v.13 no.2 / pp.99-113 / 2023
  • In a customer-based industry, customer retention is a company's competitiveness, and improving customer retention improves that competitiveness. Therefore, accurate prediction and management of potentially dormant customers is paramount to increasing the competitiveness of the enterprise. In particular, there are numerous competitors in the domestic card industry, and the government is introducing an automatic closing system for dormant card management. As a result of these social changes, the card industry must focus on better predicting and managing potentially dormant cards, and predicting dormant customers is emerging as an important challenge. In this study, a Recurrent Neural Network (RNN) methodology was used to predict potentially dormant customers in the card industry; in particular, Long Short-Term Memory (LSTM) was used to learn efficiently from long data histories. In addition, to redefine the variables needed to predict dormant customers in the card industry, the Unified Theory of Acceptance and Use of Technology (UTAUT), an integrated technology acceptance model, was applied to redefine and group the variables used in the model. As a result, stable model accuracy and F1 scores were obtained, and the hit ratio showed that models using LSTM produce stable results compared with other algorithms. It was also found that there was no moderating effect of demographic information, which previous studies had pointed out can occur in UTAUT. Therefore, among variable-selection models using UTAUT, the dormant customer prediction model using LSTM is shown to produce unbiased, stable results. This study suggests an academic contribution to the prediction of dormant customers using LSTM algorithms, which can learn well from previously untried time series data. In addition, because dormancy is predicted ahead of the time it is actually captured, the approach makes it possible to respond preemptively to customers who are about to become dormant, and it is expected to contribute greatly to the industry from a customer management perspective.
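
A minimal sketch of an LSTM dormancy classifier in the spirit of the model described above, assuming Keras; the sequence length, feature count, and data are placeholders, and the UTAUT-based variable grouping is not reproduced.

```python
# LSTM binary classifier sketch for dormancy prediction (placeholder data).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

months, n_features = 12, 8                       # e.g. 12 monthly snapshots of 8 usage variables (assumed)
X = np.random.rand(1000, months, n_features)     # placeholder card-usage sequences
y = np.random.randint(0, 2, size=1000)           # 1 = became dormant (placeholder labels)

model = Sequential([
    LSTM(32, input_shape=(months, n_features)),  # summarise the usage sequence
    Dense(1, activation="sigmoid"),              # probability of dormancy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
print(model.predict(X[:3]))                      # dormancy probabilities for three customers
```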

A Study on Automatic Classification Model of Documents Based on Korean Standard Industrial Classification (한국표준산업분류를 기준으로 한 문서의 자동 분류 모델에 관한 연구)

  • Lee, Jae-Seong; Jun, Seung-Pyo; Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.221-241 / 2018
  • As we enter the knowledge society, the importance of information as a new form of capital is being emphasized. The importance of information classification is also increasing for efficient management of the digital information produced at an exponential rate. In this study, we tried to automatically classify and provide tailored information that can help companies decide on technology commercialization. We therefore propose a method to classify information based on the Korean Standard Industrial Classification (KSIC), which indicates the business characteristics of enterprises. The classification of information or documents has largely been based on machine learning, but there is not enough training data categorized by KSIC. Therefore, this study applied a method of calculating similarity between documents. Specifically, a method and a model for presenting the most appropriate KSIC code are proposed by collecting the explanatory texts of each KSIC code and calculating their similarity with the document to be classified using the vector space model. IPC data were collected and classified by KSIC, and the methodology was then verified by comparing the results with the KSIC-IPC concordance table provided by the Korean Intellectual Property Office. The verification showed the highest agreement when the LT scheme, a variant of the TF-IDF weighting formula, was applied: the first-ranked KSIC matched in 53% of cases, and the cumulative match within the top five ranks was 76%. This confirms that the technology, industry, and market information that SMEs need can be classified by KSIC more quantitatively and objectively. In addition, the methods and results of this study can serve as basic data to support the qualitative judgment of experts when creating concordance tables between heterogeneous classification systems.
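
A minimal sketch of the vector-space matching described above: candidate code descriptions and a target document are embedded with a sublinear (logarithmic) TF weighting, roughly in the spirit of the LT scheme, and codes are ranked by cosine similarity. The code texts below are placeholders, not the official KSIC explanatory texts.

```python
# TF-IDF vector-space ranking of candidate KSIC codes (placeholder descriptions).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ksic_texts = {
    "C26": "manufacture of electronic components, semiconductors and display panels",
    "J62": "computer programming, system integration and software consultancy",
    "M70": "research and experimental development on natural sciences and engineering",
}
doc = "a patent describing a fabrication process for semiconductor display panels"

vec = TfidfVectorizer(sublinear_tf=True)          # sublinear_tf applies 1 + log(tf)
codes = list(ksic_texts)
matrix = vec.fit_transform(list(ksic_texts.values()) + [doc])
sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

ranking = sorted(zip(codes, sims), key=lambda p: p[1], reverse=True)
print(ranking)                                    # top-ranked candidate KSIC codes first
```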

Semi-automated Tractography Analysis using the Allen Mouse Brain Atlas: Comparing DTI Acquisition between NEX and SNR (알렌 마우스 브레인 아틀라스를 이용한 반자동 신경섬유지도 분석 : 여기수와 신호대잡음비간의 DTI 획득 비교)

  • Im, Sang-Jin; Baek, Hyeon-Man
    • Journal of the Korean Society of Radiology / v.14 no.2 / pp.157-168 / 2020
  • Advancements in segmentation methodology have made automatic segmentation of brain structures from structural images accurate and consistent. One automatic segmentation approach, which registers atlas information from template space to subject space, requires a high-quality atlas with accurate boundaries for consistent segmentation. The Allen Mouse Brain Atlas, widely accepted as a high-quality reference for the mouse brain, has been used in various segmentation tasks and can provide accurate coordinates and boundaries of mouse brain structures for tractography. Through probabilistic tractography, diffusion tensor images can be used to map the comprehensive neuronal network of white matter pathways of the brain. Comparisons between the neural networks of mouse and human brains have shown that various clinical tests on mouse models can simulate the disease pathology of human brains, increasing the importance of clinical mouse brain studies. However, the difference in size between human and mouse brains makes it difficult to achieve the image quality necessary for analysis, and the conditions required for sufficient image quality, such as long scan times, make the use of live samples unrealistic. To obtain a mouse brain image with a sufficient scan time, an ex-vivo experiment on a mouse brain was therefore conducted for this study. Using FSL, a tool for analyzing tensor images, we propose a semi-automated segmentation and tractography analysis pipeline for the mouse brain and apply it to various mouse models. In addition, to determine a useful signal-to-noise ratio for the diffusion tensor images acquired for tractography analysis, images acquired with various numbers of excitations (NEX) were compared.
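
A hedged outline of an FSL-based semi-automated pipeline in the spirit of the one described above, driven from Python; the file and directory names are placeholders and the exact options depend on the FSL version, so treat this as an outline rather than the authors' pipeline.

```python
# Outline of an atlas-registration + probabilistic tractography pipeline using FSL tools.
import subprocess

subject = "mouse01"                              # placeholder subject directory

# 1) Register the Allen atlas template to subject space (linear registration with flirt)
subprocess.run(["flirt", "-in", "allen_template.nii.gz",
                "-ref", f"{subject}/b0.nii.gz",
                "-out", f"{subject}/atlas_in_subject.nii.gz",
                "-omat", f"{subject}/atlas_to_subject.mat"], check=True)

# 2) Fit the fibre-orientation model needed for probabilistic tractography
subprocess.run(["bedpostx", subject], check=True)

# 3) Probabilistic tractography seeded from an atlas-derived structure mask
subprocess.run(["probtrackx2",
                "-s", f"{subject}.bedpostX/merged",
                "-m", f"{subject}.bedpostX/nodif_brain_mask",
                "-x", f"{subject}/seed_mask.nii.gz",
                "--dir=" + f"{subject}/tracts"], check=True)
```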