• Title/Summary/Keyword: 3차원 데이터 모델 (3D data model)

Coverage Analysis of VHF Aviation Communication Network for Initial UAM Operations Considering Real Terrain Environments (실제 지형 환경을 고려한 초기 UAM 운용을 위한 VHF 항공통신 커버리지 분석)

  • Seul-Ae Gwon;Seung-Kyu Han;Young-Ho Jung
    • Journal of Advanced Navigation Technology / v.28 no.1 / pp.102-108 / 2024
  • In the initial stages of urban air mobility (UAM) operations, compliance with the existing visual flight rules and instrument flight regulations for conventional crewed aircraft is crucial. In addition, voice communication between the onboard pilot and relevant UAM stakeholders, including vertiports, is essential. Consequently, very high frequency (VHF) aviation voice communication must be provided consistently throughout all phases of UAM operations. This paper presents the results of a VHF communication coverage analysis for the initial UAM demonstration areas, encompassing the Hangang River and Incheon Ara-Canal corridors as well as potential vertiport candidate locations. By accounting for the influence of terrain and buildings through a digital surface model (DSM), communication quality predictions are obtained for the analysis areas. The three-dimensional coverage analysis results indicate that stable coverage can be achieved within altitude corridors ranging from 300 m to 600 m. However, shaded areas remain in the low-altitude vertiport regions owing to the impact of high-rise buildings, so additional research is required to ensure stable coverage around vertiports at lower altitudes.
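The core computation this abstract describes is a line-of-sight test against a digital surface model. A minimal sketch of that idea follows, under stated assumptions: the DSM is a synthetic height grid, and the 10 m cell size, 127 MHz VHF carrier, antenna heights, and free-space path-loss model are illustrative choices, not values from the paper.

```python
import numpy as np

def has_line_of_sight(dsm, tx, rx, n_samples=200):
    """Check whether the straight ray from tx to rx clears the surface model.
    tx, rx: (row, col, height_m); heights are metres above the DSM datum."""
    t = np.linspace(0.0, 1.0, n_samples)
    rows = (tx[0] + t * (rx[0] - tx[0])).round().astype(int)
    cols = (tx[1] + t * (rx[1] - tx[1])).round().astype(int)
    ray_h = tx[2] + t * (rx[2] - tx[2])
    return bool(np.all(ray_h >= dsm[rows, cols]))

def fspl_db(distance_m, freq_hz=127.0e6):
    """Free-space path loss in dB (assumed propagation model)."""
    return 20 * np.log10(distance_m) + 20 * np.log10(freq_hz) - 147.55

# Synthetic 100 x 100 DSM on 10 m cells, with one high-rise in the path
dsm = np.zeros((100, 100))
dsm[40:45, 40:45] = 150.0                        # 150 m building
tx = (10, 10, 30.0)                              # ground-station antenna at 30 m
rx = (80, 80, 300.0)                             # receiver at corridor altitude
d = np.hypot((rx[0] - tx[0]) * 10.0, (rx[1] - tx[1]) * 10.0)
print(has_line_of_sight(dsm, tx, rx), round(fspl_db(d), 1), "dB")
```

Sweeping the receiver position over the grid at each corridor altitude would yield the kind of 3-D coverage map, with building shadows at low altitude, that the paper reports.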

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, and handling it requires considering both the characteristics of multidimensional data and those of time series data. For multidimensional data, correlations between variables must be considered, but existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time series data, in turn, is typically preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis; these techniques further increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used in the early days, and studies applying machine learning and artificial neural networks to the field are now active. Statistical methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. Regression-based detection learns a regression model based on parametric statistics and flags anomalies by comparing predicted and actual values; its performance drops when the model is not solid or when the data contains noise or outliers, and it is restricted to training data free of noise and outliers. An autoencoder, built on artificial neural networks, is trained to reproduce its input as closely as possible. It has many advantages over existing probability-based and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy a probability distribution or linearity assumption, and it can be trained unsupervised, without labeled data. However, it still has difficulty identifying local outliers in multidimensional data, and the dimensionality of the data grows greatly because of the time series characteristics. In this study, we propose CMAE (Conditional Multimodal Autoencoder), which improves anomaly detection performance by considering local outliers and time series characteristics. First, a Multimodal Autoencoder (MAE) is applied to mitigate the limitation in identifying local outliers in multidimensional data. Multimodal models are commonly used to learn different types of input, such as voice and images; the modals share the autoencoder's bottleneck and thereby learn the correlations between them. In addition, a Conditional Autoencoder (CAE) is used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time is used as the condition so that periodicity can be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Reconstruction performance over 41 variables was confirmed for the proposed and comparison models. Reconstruction performance differs by variable: for the Memory, Disk, and Network modals the loss is small and reconstruction works well in all three autoencoders, the Process modal shows no significant difference across the three models, and the CPU modal shows excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, the models ranked CMAE, MAE, then UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies; its accuracy improved to 87.12%, and its F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows introduce procedures that must be managed, and the dimensionality increase they cause slows inference. The proposed model avoids these, making it easy to apply in practice in terms of inference speed and model management.
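To make the CMAE idea concrete, here is a minimal PyTorch sketch under stated assumptions: the four modals (CPU, Memory, Disk, Network), their dimensions, the layer widths, and the sin/cos encoding of time-of-day as the condition are all illustrative; the abstract does not specify the exact architecture.

```python
import torch
import torch.nn as nn

class CMAE(nn.Module):
    """Sketch of a Conditional Multimodal Autoencoder: one encoder/decoder
    per modal, a shared bottleneck, and a time condition concatenated to
    the latent code (layer sizes are illustrative assumptions)."""
    def __init__(self, modal_dims, cond_dim=2, latent_dim=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 16), nn.ReLU()) for d in modal_dims])
        self.bottleneck = nn.Linear(16 * len(modal_dims) + cond_dim, latent_dim)
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(latent_dim + cond_dim, 16), nn.ReLU(),
                           nn.Linear(16, d)) for d in modal_dims])

    def forward(self, modals, cond):
        h = torch.cat([enc(x) for enc, x in zip(self.encoders, modals)]
                      + [cond], dim=1)
        z = torch.relu(self.bottleneck(h))
        zc = torch.cat([z, cond], dim=1)
        return [dec(zc) for dec in self.decoders]

# Time-of-day encoded cyclically, one plausible way to feed periodicity
hours = torch.rand(32, 1) * 24
cond = torch.cat([torch.sin(2 * torch.pi * hours / 24),
                  torch.cos(2 * torch.pi * hours / 24)], dim=1)
modals = [torch.randn(32, d) for d in (10, 12, 9, 10)]  # CPU/Mem/Disk/Net
recon = CMAE([10, 12, 9, 10])(modals, cond)
loss = sum(nn.functional.mse_loss(r, x) for r, x in zip(recon, modals))
```

Each modal keeps its own encoder and decoder, but all of them meet in the shared bottleneck, which is how the model can learn cross-modal correlations without inflating the input dimensionality.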

An Empirical Analysis of Accelerator Investment Determinants: A Longitudinal Study on Investment Determinants and Investment Performance (액셀러레이터 투자결정요인 실증 분석: 투자결정요인과 투자성과에 대한 종단 연구)

  • Jin Young Joo;Jeong Min Nam
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.4 / pp.1-20 / 2023
  • This study empirically identified the relationship between accelerators' investment determinants and investment performance. Through a literature review, four dimensions and 12 measurement items were extracted as the independent variables (investment determinants), and investment performance was operationalized as the cumulative amount of follow-on investment, following previous studies. Performance data were collected for 594 companies selected by TIPS from 2017 to 2019, for which data are relatively reliable and easy to secure, and the hypotheses on the dependent variable, the cumulative amount of follow-on investment three years after the initial investment, were tested by multiple regression analysis. The results show that 'years of industry experience' among founder characteristics and 'market size', 'market growth', 'competitive strength', and 'number of patents' among product/service and market characteristics had significant positive (+) effects. Among the independent variables, competitive strength among market characteristics had the largest effect on the dependent variable, followed by years of industry experience, number of patents, market size, and market growth. This differs from previous studies, which relied mainly on qualitative methods and mostly found founder characteristics most important; in this empirical analysis, market characteristics mattered most, and the intensity of competition, ranked low in prior studies, had the greatest influence. The academic significance of this study is that it presents a concrete methodology for collecting and building 594 empirical samples where empirical research on accelerator investment determinants has been lacking, and it opens the way to extend the theoretical discussion of investment determinants through causal research. In practice, a systematic, experience-based model of investment determinants can help accelerators make effective investment decisions despite the information asymmetry and uncertainty of startups.
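The analysis is an ordinary multiple regression of cumulative follow-on investment on the determinant items. A minimal statsmodels sketch of that setup, with synthetic data and hypothetical column names standing in for the paper's 594-company TIPS sample:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 594  # sample size reported in the abstract

# Hypothetical determinant variables (names are illustrative, not the paper's)
df = pd.DataFrame({
    "industry_experience": rng.normal(10, 4, n),
    "market_size": rng.normal(0, 1, n),
    "market_growth": rng.normal(0, 1, n),
    "competitive_strength": rng.normal(0, 1, n),
    "num_patents": rng.poisson(3, n),
})
# Synthetic dependent variable: log cumulative follow-on investment
df["log_followon_investment"] = (0.3 * df["competitive_strength"]
    + 0.2 * df["industry_experience"] + rng.normal(0, 1, n))

X = sm.add_constant(df.drop(columns="log_followon_investment"))
model = sm.OLS(df["log_followon_investment"], X).fit()
print(model.summary())  # coefficient signs and p-values per determinant
```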

A Study on Evaluation of Visual Factor for Measuring Subjective Virtual Realization (주관적인 가상 실감화 측정 방법에 대한 시각적 요소 평가 연구)

  • Won, Myeung-Ju;Park, Sang-In;Kim, Chi-Jung;Lee, Eui-Chul;Whang, Min-Cheol
    • Science of Emotion and Sensibility / v.15 no.3 / pp.389-398 / 2012
  • Virtual worlds pursue a sense of reality, as if they actually existed. To evaluate the sense of reality in computer-simulated worlds, several subjective questionnaires with specific independent variables have been proposed in the literature. However, these questionnaires lack the reliability and validity needed to define and measure virtual realization, and few studies have investigated the effect of visual factors on the sense of reality experienced in a virtual environment. This study therefore reinvestigated the variables and proposed a more reliable and valid questionnaire for evaluating virtual realization, focusing on visual factors. Twenty-one questions were gleaned from the literature and from subjective interviews with focus groups. Exploratory factor analysis with oblique rotation was performed on data obtained from 200 participants (100 females) after exposure to a virtual character image depicted in an extreme manner. After poorly loading items were removed, the remaining items were subjected to confirmatory factor analysis on data from the same participants. As a result, three significant factors were determined to measure virtual realization efficiently: visual presence (3 items), visual immersion (7 items), and visual interactivity (4 items). The proposed factors were verified through a subjective evaluation in which participants assessed a 3D virtual eyeball model for visual presence. The results indicate that the measurement method is suitable for evaluating the degree of virtual realization and can be expected to measure it reasonably.
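The statistical pipeline here is exploratory factor analysis with oblique rotation, followed by pruning of poorly loading items. A sketch with the factor_analyzer package, using synthetic responses in place of the study's 200-participant data; the three latent factors, loading ranges, and noise level are assumptions:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

rng = np.random.default_rng(1)
# Synthetic stand-in for 200 respondents x 21 questionnaire items
latent = rng.normal(size=(200, 3))          # presence / immersion / interactivity
loadings = rng.uniform(0.4, 0.9, size=(3, 21))
items = pd.DataFrame(latent @ loadings + rng.normal(0, 0.5, (200, 21)),
                     columns=[f"q{i + 1}" for i in range(21)])

fa = FactorAnalyzer(n_factors=3, rotation="oblimin")  # oblique rotation
fa.fit(items)
print(np.round(fa.loadings_, 2))  # inspect, drop poorly loading items, re-fit
```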

Finite element analysis of the effects of mouthguard produced by combination of layers of different materials on teeth and jaw (다양한 물성을 혼용하여 제작된 구강보호장치가 치아 및 악골에 미치는 영향)

  • So, Woong-Seob;Lee, Hyun-Jong;Choi, Woo-Jin;Hong, Sung-Jin;Ryu, Kyung-Hee;Choi, Dae-Gyun
    • The Journal of Korean Academy of Prosthodontics / v.49 no.4 / pp.324-332 / 2011
  • Purpose: The purpose of this study was to compare the stress distribution in the teeth and jaw under load for widely used mouthguards whose layers were assigned different material properties. Materials and methods: A Korean adult with a normal cranium and mandible was selected. A customized mouthguard was fabricated using a DRUFOMAT plate and DRUFOMAT-TE/-SQ (Dreve) according to the Signature Mouthguard system. The cranium was scanned by computed tomography at 1 mm intervals, modeled with the CANTIBio BIONIX/Body Builder program, and simulated and analyzed with the Altair HyperMesh program. The mouthguards were classified by layer composition as follows: (1) soft guard (Bioplast) (SG); (2) hard guard (Duran) (HG); (3) medium guard (Drufomat) (MG); (4) soft layer + hard layer (SG+HG); (5) hard layer + soft layer (HG+SG); (6) soft layer + hard layer + soft layer (SG+HG+SG); (7) hard layer + soft layer + hard layer (HG+SG+HG). The impact locations on the mandible were the gnathion, the center of the inferior border, and the anterior edge of the gonial angle, with an oblique (45°) impact direction. The impact load was 800 N for 0.1 s. Stress distribution was measured at the maxillary teeth, TMJ, and maxilla. Statistical analysis used repeated-measures ANOVA, with Duncan's test as a post-hoc test where differences were found. Results: At the teeth and maxilla, the mouthguard presenting a soft layer to the mandibular teeth showed the lowest stress; in contrast, at the condyle, the mouthguard presenting a hard layer to the mandibular teeth showed the lowest stress. Conclusion: For all impact directions, the three-layer soft + hard + soft design (SG+HG+SG), in which the hard layer is sandwiched between two soft layers, showed the most even distribution of stress under impact.
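The finite element study itself cannot be reproduced from an abstract, but the mechanical intuition behind the layered designs can be sketched with a toy one-dimensional series-layer model: layers loaded in series carry the same stress, so the compliant (soft) layers absorb most of the strain energy. The moduli, thicknesses, and contact area below are illustrative assumptions; only the 800 N load comes from the study.

```python
# Toy 1-D series-layer model of a layered mouthguard under impact load.
E_SOFT, E_HARD = 18.0, 2000.0   # MPa; soft (EVA-like) vs hard (assumed values)
AREA = 100.0                    # mm^2 of loaded contact patch (assumed)
FORCE = 800.0                   # N, impact load from the study

def layer_strain_energy(layers):
    """layers: list of (modulus_MPa, thickness_mm). In series, every layer
    sees the same stress; energy per unit area is sigma^2 * t / (2E)."""
    stress = FORCE / AREA       # MPa, uniform across the stack
    return [(stress ** 2) * t / (2.0 * E) for E, t in layers]

designs = {
    "SG":       [(E_SOFT, 4.0)],
    "HG":       [(E_HARD, 4.0)],
    "SG+HG+SG": [(E_SOFT, 1.5), (E_HARD, 1.0), (E_SOFT, 1.5)],
}
for name, layers in designs.items():
    print(name, [round(u, 3) for u in layer_strain_energy(layers)])
```

The soft outer layers of the SG+HG+SG stack absorb most of the energy while the stiff middle layer spreads the load, which is consistent with the even stress distribution the paper reports for that design.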

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.107-118 / 2016
  • The advent of 5G mobile communications, expected in 2020, will enable services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time operation. In particular, high reliability and delay sensitivity at increased data rates are critical for M2M, IoT, and Factory 4.0. 5G standardization bodies around the world have considered these services and grouped them to derive technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for cases such as sporting events or emergencies; the second covers support for e-Health, car reliability, and the like; the third concerns VR games, with delay sensitivity and real-time techniques. These bodies have recently been reaching agreement on the requirements and target levels for such scenarios. Various techniques are being studied to satisfy these requirements, and they are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN is being standardized by the ONF and fundamentally refers to a structure that separates control-plane signaling from data-plane packets. One of the best examples of low latency with high reliability is an intelligent traffic system (ITS) using V2X: because a car passes through a small cell of the 5G network very rapidly, emergency messages must be delivered in a very short time, a typical case of high delay sensitivity. 5G must therefore support high reliability and tight delay requirements for V2X in traffic control, which makes V2X a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) encompasses all communication methods applicable to roads and vehicles and refers to a connected or networked vehicle. It divides into three kinds of communication: between a vehicle and infrastructure (vehicle-to-infrastructure, V2I); between vehicles (vehicle-to-vehicle, V2V); and between a vehicle and mobile equipment (vehicle-to-nomadic devices, V2N). Further variants will be added in various fields. Because SDN is under consideration as the next-generation network architecture, its architecture is significant here; however, the centralized architecture of SDN can be unfavorable for delay-sensitive services, since a central controller must communicate with many nodes and provide processing power. For emergency V2X communication, the delay-related control functions therefore call for a tree-like supporting structure, and the architecture of the network that processes vehicle information becomes a major variable affecting delay. Because the desired level of delay sensitivity is difficult to meet with a fully centralized SDN structure, research is needed on the optimal size of the SDN region that processes the information. This study examined the SDN architecture under the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation over vehicle speed, cell radius, and cell tier to derive the range of cells over which information can be transferred in the SDN network. In the simulation, because 5G provides a sufficiently high data rate, the information supporting neighboring vehicles was assumed to be delivered without error. The 5G small cells were assumed to have radii of 50-100 m, and vehicle speeds of 30-200 km/h were considered in examining the network architecture that minimizes delay.
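The trade-off being simulated can be illustrated with back-of-envelope arithmetic: the worst-case controller round trip grows with the number of cell tiers between the vehicle and the SDN controller, while the time a vehicle spends in one cell shrinks with speed. All latency figures and the delay budget below are assumptions; only the 50-100 m radii and 30-200 km/h speeds come from the abstract.

```python
# Back-of-envelope sketch of the delay question the paper simulates:
# how many cell tiers away can the SDN controller sit before the
# V2X emergency-message budget is exceeded?
PER_HOP_LATENCY_MS = 1.0   # assumed backhaul hop between cell tiers
CONTROL_PROC_MS = 2.0      # assumed controller processing time
RADIO_LATENCY_MS = 1.0     # assumed 5G air-interface latency
DELAY_BUDGET_MS = 10.0     # illustrative end-to-end V2X budget

def worst_case_delay_ms(tiers):
    # up to the controller and back down: two traversals of the tier path
    return RADIO_LATENCY_MS + 2 * tiers * PER_HOP_LATENCY_MS + CONTROL_PROC_MS

def cell_dwell_time_s(speed_kmh, radius_m):
    # time a vehicle spends crossing one small cell (diameter / speed)
    return (2 * radius_m) / (speed_kmh / 3.6)

for tiers in range(1, 6):
    d = worst_case_delay_ms(tiers)
    print(f"tiers={tiers}: {d:.1f} ms  within budget: {d <= DELAY_BUDGET_MS}")
print(f"dwell at 200 km/h in a 50 m cell: {cell_dwell_time_s(200, 50):.2f} s")
```

The short dwell time at high speed is what forces the handover and emergency signaling to complete within a few milliseconds, bounding how large the SDN region around the vehicle can be.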

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been active, with remarkable results in fields such as classification, summarization, and generation. Among text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification, with one label out of two classes; multi-class classification, with one label out of several classes; and multi-label classification, with multiple labels out of several classes. Multi-label classification in particular requires a different training method because of its multiple labels, and since the number of labels to be predicted increases with the number of labels and classes, prediction becomes harder and performance gains become difficult. To overcome these limitations, research on label embedding is active: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the original high-dimensional label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress labels by random transformation, they struggle to capture non-linear relationships between labels, and the resulting latent label space cannot fully contain the information of the original labels. Attempts to improve performance by applying deep learning to label embedding are therefore increasing; label embedding using an autoencoder, a deep learning model effective for data compression and reconstruction, is representative. Traditional autoencoder-based label embedding, however, loses a large amount of information when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space; this traces back to the vanishing-gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: adding a layer's input to its output prevents the gradient from vanishing during backpropagation, enabling efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, and studies using them in autoencoders or in the label embedding process are still lacking. Therefore, this study proposes an autoencoder-based label embedding methodology that adds skip connections to both the encoder and the decoder, forming a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates the multi-label classification after restoring the predicted vector to the original label space. The accuracy, precision, recall, and F1-score used as performance indicators were far superior for multi-label classification based on the proposed methodology compared with traditional multi-label classification methods. This indicates that the derived low-dimensional latent label space reflects the information of the high-dimensional label space well, which ultimately improves the performance of multi-label classification itself. The utility of the methodology was further examined by comparing its performance across domain characteristics and across dimensionalities of the latent label space.
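A minimal PyTorch sketch of the proposed label autoencoder follows; the label-space size, hidden widths, and latent dimensionality are assumptions, and the additive skip connections in the encoder and decoder are the element the paper adds over a plain autoencoder.

```python
import torch
import torch.nn as nn

class SkipLabelAutoencoder(nn.Module):
    """Sketch: compress a high-dimensional multi-hot label vector into a
    low-dimensional latent label space, with additive skip connections in
    both encoder and decoder (sizes are illustrative assumptions)."""
    def __init__(self, n_labels=1000, hidden=256, latent=32):
        super().__init__()
        self.enc1 = nn.Linear(n_labels, hidden)
        self.enc2 = nn.Linear(hidden, hidden)   # skip: enc1 output added back
        self.to_latent = nn.Linear(hidden, latent)
        self.dec1 = nn.Linear(latent, hidden)
        self.dec2 = nn.Linear(hidden, hidden)   # skip: dec1 output added back
        self.out = nn.Linear(hidden, n_labels)

    def forward(self, y):
        h = torch.relu(self.enc1(y))
        h = torch.relu(self.enc2(h)) + h        # encoder skip connection
        z = self.to_latent(h)                   # latent label space
        g = torch.relu(self.dec1(z))
        g = torch.relu(self.dec2(g)) + g        # decoder skip connection
        return torch.sigmoid(self.out(g)), z

y = (torch.rand(16, 1000) < 0.01).float()       # sparse multi-hot labels
recon, z = SkipLabelAutoencoder()(y)
loss = nn.functional.binary_cross_entropy(recon, y)
```

The identity path added at each hidden layer is what keeps gradients from vanishing as the encoder and decoder are deepened, which is the stated motivation for the skip connections.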

Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June;Ham, Young-Jin;Lee, Ki-Dong
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.141-156 / 2013
  • Research on the welfare services of local governments in Korea has tended to focus on isolated issues such as the disabled, childcare, and the aging population (Kang, 2004; Jung et al., 2009). Lately, local officials have come to realize that they need more comprehensive welfare services for all residents, not just these focus groups, yet studies taking the focused-group approach remain the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System is an information system that comprehensively manages 292 welfare benefits provided by 17 ministries and 40 thousand welfare services provided by 230 local authorities in Korea; its purpose is to improve the efficiency of the social welfare delivery process. The study of local government expenditure has been on the rise in the decades since local autonomy was reinstated, but these studies face limits on data collection. A local government's welfare effort (spending) has primarily been measured by the expenditure or budget per capita set aside for welfare; this practice uses a per-capita monetary value as a proxy for welfare effort, on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). This expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, using the actual amount spent or a percentage figure as a dependent variable has limitations: because budgets and expenditures are greatly influenced by a local government's total budget, relying on such monetary values may inflate or deflate the true 'welfare effort' (Jang, 2012). In addition, government budgets usually contain a large amount of administrative cost, such as salaries for local officials, which is largely unrelated to actual welfare expenditure (Jang, 2011). This paper uses local government welfare service data from the detailed data sets linked to the Social Security Information System. Its purpose is to analyze the factors that affected the self-funded social welfare spending of 230 local authorities in 2012, applying a multiple-regression model to pooled financial data from the system. In our research model, we use a local government's welfare budget as a share of its total budget (%) as the true measure of its welfare effort (spending), excluding central government subsidies and support used for local welfare services, because central government welfare support does not truly reflect a local government's own welfare effort. The dependent variable is the volume of welfare spending, and the independent variables fall into three categories: socio-demographic factors, the local economy, and the financial capacity of the local government. Local authorities were categorized into three groups (districts, cities, and suburb areas), and a dummy variable was used as the control variable for local political factors. The analysis shows that the volume of self-funded welfare spending is commonly influenced by the ratio of the welfare budget to the total local budget, the infant population, the financial self-reliance ratio, and the level of unemployment. Interestingly, the influential factors differ by the size of the local government: significant effects were found for local government finance characteristics (degree of financial independence, financial self-reliance rate, share of the social welfare budget), regional economic characteristics (job openings-to-applicants ratio), and demographic characteristics (share of infants). These results imply that local authorities should adopt differentiated welfare strategies according to their conditions and circumstances. The contribution of this paper is to have empirically identified the significant factors influencing the self-funded welfare spending of local governments in Korea.
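The model is a multiple regression with a local-government-type dummy as the control variable, re-estimated by group. A sketch with statsmodels formulas, using synthetic data and hypothetical column names in place of the Social Security Information System extracts:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 230  # number of local authorities in the study

# Hypothetical variables mirroring the paper's three categories
df = pd.DataFrame({
    "welfare_ratio": rng.uniform(1, 15, n),  # welfare budget / total budget (%)
    "infant_ratio": rng.uniform(2, 8, n),    # socio-demographic
    "self_reliance": rng.uniform(10, 70, n), # financial capacity
    "unemployment": rng.uniform(1, 6, n),    # local economy
    "gov_type": rng.choice(["district", "city", "suburb"], n),
})

# gov_type enters as dummy variables, mirroring the paper's control variable
model = smf.ols("welfare_ratio ~ infant_ratio + self_reliance + unemployment"
                " + C(gov_type)", data=df).fit()
print(model.summary())

# The paper finds different factors by government size; a groupby re-estimate
# reproduces that kind of split
for gtype, sub in df.groupby("gov_type"):
    m = smf.ols("welfare_ratio ~ infant_ratio + self_reliance + unemployment",
                data=sub).fit()
    print(gtype, m.params.round(3).to_dict())
```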