• Title/Summary/Keyword: 고차원 (high-dimensional)


Graph Convolutional - Network Architecture Search : Network architecture search Using Graph Convolution Neural Networks (그래프 합성곱-신경망 구조 탐색 : 그래프 합성곱 신경망을 이용한 신경망 구조 탐색)

  • Su-Youn Choi;Jong-Youel Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.1
    • /
    • pp.649-654
    • /
    • 2023
  • This paper proposes the design of a neural network architecture search model using graph convolutional neural networks. Because deep learning models learn as black boxes, it is difficult to verify whether a designed model has a structure with optimized performance. A neural architecture search model consists of a recurrent neural network that generates a model and a convolutional neural network that is the generated network. Conventional architecture search models use recurrent neural networks; in this paper, we propose GC-NAS, which instead uses graph convolutional neural networks to generate convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth, and a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) in parallel based on the depth information. Because depth information is reflected, the search area is wider, and because the search is conducted in parallel with depth information, the purpose of each search region is clear, so GC-NAS is judged to be theoretically superior in structure to existing models. GC-NAS is expected to solve the problems of the high-dimensional time axis and the limited spatial search range of recurrent neural networks in existing architecture search models through its graph convolutional neural network block and graph generation algorithm. We also hope that the GC-NAS proposed in this paper will spur active research on applying graph convolutional neural networks to neural architecture search.
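The graph convolution operation GC-NAS builds on can be sketched as a single layer in NumPy. The graph, features, and weights below are illustrative stand-ins, and the normalized-adjacency form follows the common Kipf-Welling formulation rather than anything specified in the abstract.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)  # ReLU activation

# Toy 4-node path graph, 3-d node features, 2 output channels.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
H = gcn_layer(A, X, W)
print(H.shape)
```

Each output row mixes a node's features with those of its neighbors, which is what lets a graph-based controller propagate depth information across candidate layers.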

Characteristics of Water Level and Velocity Changes due to the Propagation of Bore (단파의 전파에 따른 수위 및 유속변화의 특성에 관한 연구)

  • Lee, Kwang Ho;Kim, Do Sam;Yeh, Harry
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.5B
    • /
    • pp.575-589
    • /
    • 2008
  • In the present work, we investigate the hydrodynamic behavior of a turbulent bore, such as a tsunami bore or tidal bore, generated by the removal of a gate with water impounded on one side. The bore generation system is similar to that used in a general dam-break problem. To numerically simulate the formation and propagation of a bore, we consider the incompressible flows of two immiscible fluids, liquid and gas, governed by the Navier-Stokes equations. The interface between the two fluids is tracked with the volume-of-fluid (VOF) technique, and the M-type cubic interpolated propagation (MCIP) scheme is used to solve the Navier-Stokes equations. The MCIP method is a low-diffusion, stable scheme that extends the original one-dimensional CIP to higher dimensions using a fractional step technique. Further, a large eddy simulation (LES) closure scheme, a cost-effective approach to turbulence simulation, is used to predict the evolution of quantities associated with turbulence. To verify the applicability of the developed numerical model to bore simulation, laboratory experiments are performed in a wave tank. Comparisons between the numerical results of the present model and the experimental data show good agreement.
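The paper solves the full two-phase Navier-Stokes equations with VOF and MCIP; as a drastically simplified analogue of the same gate-removal setup, the sketch below integrates the 1D shallow-water equations with a Lax-Friedrichs scheme. Grid size, depths, and the CFL factor are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

g = 9.81                                   # gravity (m/s^2)
nx, dx = 200, 0.05                         # cells, cell width (m)
h = np.where(np.arange(nx) < nx // 2, 1.0, 0.2)   # impounded water behind the gate
hu = np.zeros(nx)                          # momentum; fluid initially at rest

def flux(h, hu):
    """Physical flux of the 1D shallow-water equations."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

t, t_end = 0.0, 0.2
while t < t_end:
    dt = 0.4 * dx / np.max(np.abs(hu / h) + np.sqrt(g * h))   # CFL condition
    U = np.array([h, hu])
    F = flux(h, hu)
    # Lax-Friedrichs update on interior cells; boundary cells held fixed
    U_new = U.copy()
    U_new[:, 1:-1] = 0.5 * (U[:, :-2] + U[:, 2:]) - dt / (2 * dx) * (F[:, 2:] - F[:, :-2])
    h, hu = U_new
    t += dt

volume = float(h.sum() * dx)               # total water volume is conserved
print(round(volume, 6))
```

After gate removal a bore front propagates into the shallow side while a rarefaction moves into the reservoir, the same qualitative picture the experiments measure with water level and velocity gauges.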

Dynamic Nonlinear Prediction Model of Univariate Hydrologic Time Series Using the Support Vector Machine and State-Space Model (Support Vector Machine과 상태공간모형을 이용한 단변량 수문 시계열의 동역학적 비선형 예측모형)

  • Kwon, Hyun-Han;Moon, Young-Il
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.3B
    • /
    • pp.279-289
    • /
    • 2006
  • The reconstruction of low-dimensional nonlinear behavior from hydrologic time series has been an active area of research over the last decade. In this study, we present an application of a powerful state-space reconstruction methodology using Support Vector Machines (SVMs) to the Great Salt Lake (GSL) volume. SVMs are machine learning systems that use a hypothesis space of linear functions in a kernel-induced higher-dimensional feature space. SVMs are optimized by minimizing a bound on a generalized error (risk) measure, rather than just the mean square error over a training set. The utility of this SVM regression approach is demonstrated through applications to short-term forecasts of the biweekly GSL volume. The SVM-based reconstruction is used to develop time series forecasts for multiple lead times ranging from two weeks to several months. The reliability of the algorithm in learning and forecasting the dynamics is tested using split-sample sensitivity analyses, with particular interest in forecasting extreme states. Unlike previously reported methodologies, SVMs are able to extract the dynamics using only a few past observed data points (support vectors) out of the training examples. In terms of statistical measures, the SVM-based prediction model demonstrated encouraging and promising results in short-term prediction. Thus, the SVM method presented in this study offers a competitive methodology for forecasting hydrologic time series.
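The delay-embedding idea the abstract describes can be sketched as follows. Kernel ridge regression stands in here for the paper's SVM regression (both fit functions in an RBF-kernel feature space), and the toy series and hyperparameters are illustrative, not the GSL settings.

```python
import numpy as np

def delay_embed(series, dim, lead=1):
    """Build (state vector, future value) pairs from a scalar series."""
    X, y = [], []
    for t in range(dim - 1, len(series) - lead):
        X.append(series[t - dim + 1 : t + 1])   # last `dim` observations
        y.append(series[t + lead])              # value `lead` steps ahead
    return np.array(X), np.array(y)

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(400)  # toy "lake volume"

X, y = delay_embed(series, dim=5, lead=1)
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

K = rbf_kernel(Xtr, Xtr)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(Xtr)), ytr)   # ridge-regularized fit
pred = rbf_kernel(Xte, Xtr) @ alpha
rmse = float(np.sqrt(np.mean((pred - yte) ** 2)))
print(round(rmse, 3))
```

The split-sample test mirrors the paper's design: the model is fit on the first part of the record and judged only on held-out future values.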

Comparative Analysis of Self-supervised Deephashing Models for Efficient Image Retrieval System (효율적인 이미지 검색 시스템을 위한 자기 감독 딥해싱 모델의 비교 분석)

  • Kim Soo In;Jeon Young Jin;Lee Sang Bum;Kim Won Gyum
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.12
    • /
    • pp.519-524
    • /
    • 2023
  • In hashing-based image retrieval, the hash code of a manipulated image differs from that of the original image, making it difficult to retrieve the same image. This paper proposes and evaluates a self-supervised deep hashing model that generates perceptual hash codes from feature information such as the texture, shape, and color of images. The comparison models are autoencoder-based variational inference models whose encoders are designed with fully connected layers, convolutional neural networks, and transformer modules. The proposed model is a variational inference model that includes a SimAM module for extracting geometric patterns and positional relationships within images. The SimAM module can learn latent vectors that highlight objects or local regions through an energy function based on the activation values of neurons and their surrounding neurons. The proposed method is a representation learning model that generates low-dimensional latent vectors from high-dimensional input images, and the latent vectors are binarized into distinguishable hash codes. Experimental results on public datasets such as CIFAR-10, ImageNet, and NUS-WIDE show that the proposed model is superior to the comparison models and performs on par with supervised learning-based deep hashing models. The proposed model can be used in application systems that require low-dimensional representations of images, such as image search or copyright image determination.
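The final retrieval step the abstract describes, binarizing latent vectors into hash codes and matching by Hamming distance, can be sketched as follows; random vectors stand in for the encoder outputs.

```python
import numpy as np

def binarize(latents):
    """Sign-threshold real-valued latent vectors into {0,1} hash codes."""
    return (latents > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(42)
db_latents = rng.standard_normal((100, 32))   # 100 images, 32-d latent vectors
db_codes = binarize(db_latents)

# A lightly manipulated copy of image 7: its latent vector is only perturbed,
# so its hash code should stay close in Hamming distance.
query_code = binarize(db_latents[7] + 0.1 * rng.standard_normal(32))

dists = [hamming(query_code, c) for c in db_codes]
print(dists[7], min(dists))
```

A perceptual hash works precisely because small manipulations only flip a few bits, while unrelated images differ in roughly half of them.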

Speaker verification with ECAPA-TDNN trained on new dataset combined with Voxceleb and Korean (Voxceleb과 한국어를 결합한 새로운 데이터셋으로 학습된 ECAPA-TDNN을 활용한 화자 검증)

  • Keumjae Yoon;Soyoung Park
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.2
    • /
    • pp.209-224
    • /
    • 2024
  • Speaker verification, which determines whether two voice recordings belong to the same speaker, is becoming popular as a method of non-face-to-face identity authentication. In cases where a criminal's voice remains at a crime scene, it is vital to have a speaker verification system that can accurately compare the two pieces of voice evidence. In this study, a new speaker verification system for the Korean language was built using a deep learning model. High-dimensional voice data with high variability, such as background noise, make deep learning-based methods necessary for speaker matching. To construct the matching algorithm, the ECAPA-TDNN model, one of the best-known deep learning systems for speaker verification, was selected. Voxceleb, a large voice dataset collected from people of various nationalities, contains no Korean. To determine the form of dataset appropriate for learning the Korean language, experiments were carried out to find out how Korean voice data affect matching performance. The results showed that, when comparing models trained only on Voxceleb with models trained on datasets combining Voxceleb and Korean data to maximize language and speaker diversity, the models trained with Korean data perform better on all test sets.
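Once embeddings are extracted, the verification decision itself is typically a cosine-similarity comparison against a threshold. The sketch below assumes 192-dimensional embeddings (the usual ECAPA-TDNN output size); the vectors and threshold are illustrative stand-ins.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(emb1, emb2, threshold=0.5):
    """Accept the trial if the embeddings are similar enough."""
    return cosine_similarity(emb1, emb2) >= threshold

rng = np.random.default_rng(1)
enroll = rng.standard_normal(192)                       # enrolled speaker embedding
trial_same = enroll + 0.3 * rng.standard_normal(192)    # same speaker, new utterance
trial_diff = rng.standard_normal(192)                   # a different speaker

print(same_speaker(enroll, trial_same), same_speaker(enroll, trial_diff))
```

In practice the threshold is tuned on a development set to balance false acceptances against false rejections, which is why test-set composition (e.g. including Korean speakers) matters so much.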

A Passport Recognition and face Verification Using Enhanced fuzzy ART Based RBF Network and PCA Algorithm (개선된 퍼지 ART 기반 RBF 네트워크와 PCA 알고리즘을 이용한 여권 인식 및 얼굴 인증)

  • Kim Kwang-Baek
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.17-31
    • /
    • 2006
  • In this paper, passport recognition and face verification methods are proposed that can automatically recognize passport codes and detect forged passports, in order to improve the efficiency and systematic control of immigration management. Adjusting the slant is very important for character recognition and face verification, since slanted passport images can introduce various unwanted effects into the recognition of individual codes and faces. Therefore, after smearing the passport image, the longest extracted string of characters is selected, and the angle is adjusted using the slant of the horizontal straight line that connects the centers of thickness between the left and right parts of the string. Passport codes are extracted using the Sobel operator, horizontal smearing, and an 8-neighborhood contour tracking algorithm. The extracted code-string areas are converted to binary format with a repeated binarization method, the code strings are restored by applying a CDM mask to the binary string area, and individual codes are extracted with the 8-neighborhood contour tracking algorithm. The proposed RBF network constructs its middle layer with an enhanced fuzzy ART algorithm that uses a fuzzy logic connection operator and dynamically controls the vigilance parameter. The face is authenticated by measuring the similarity between the feature vector of the facial image from the passport and the feature vector of the facial image from the database, both constructed with the PCA algorithm. Tests using forged passports and passports with slanted images showed the proposed method to be effective in recognizing passport codes and verifying facial images.
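The Sobel step used for code extraction can be sketched as follows, with a synthetic vertical edge standing in for a passport image; the kernel values are the standard Sobel masks, while the image and sizes are made up.

```python
import numpy as np

Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
Ky = Kx.T                                                          # vertical gradient

def convolve_valid(img, k):
    """Valid-mode 2D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                     # a sharp vertical edge between columns 3 and 4
mag = np.hypot(convolve_valid(img, Kx), convolve_valid(img, Ky))
print(mag.max())                     # strongest response sits along the edge
```

Edges found this way outline the machine-readable code strings, which the smearing and contour-tracking stages then group and segment.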


Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.81-96
    • /
    • 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Content-based video analysis using visual features has been the main approach to video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which extract various features from audio-visual data such as frames, shots, colors, texture, or shape, and similarity between videos can be measured through such analysis. However, a movie, one of the typical types of video data, is organized by story as well as by audio-visual data. This causes a semantic gap between the significant information recognized by people and the information resulting from content-based analysis when low-level content-based video analysis using only audio-visual data is applied to movie information retrieval. The reason for this semantic gap is that the story line of a movie is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be carried out by content-based analysis techniques alone; a formal model is needed that can determine relationships among movie contents or track changes in meaning in order to accurately retrieve story information. Recently, story-based video analysis techniques using a social network concept have emerged for story information retrieval. These approaches represent a story by using the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in the relationships between characters as the story develops. Second, they miss profound information, such as the emotions that indicate the identities and psychological states of the characters. Emotion is essential to understanding a character's motivation, conflict, and resolution.
Third, they do not take account of the events and background that contribute to the story. This paper therefore reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks. We also suggest the necessary elements, such as character, background, and events, based on narrative structures introduced in the literature. We extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, an extensive English thesaurus that offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and we present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of the character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links). Additionally, we propose a method that can track a character's inner nature by tracing indices such as the degree, in-degree, and out-degree of the character network as the movie progresses. Finally, the spatial background where characters meet and where events take place is an important element of the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie. Important places where main characters first meet or where they stay for long periods can be extracted through this scene map. In view of the aforementioned three elements (character, event, background), we extract a variety of information related to the story and evaluate the performance of the proposed method.
We can track story information extracted over time and detect a change in the character's emotion or inner nature, spatial movement, and conflicts and resolutions in the story.
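The in-degree and out-degree indices described above can be sketched over a directed "who speaks to whom" graph. The scene data below is invented, not extracted from the actual Pretty Woman script, though the names echo it.

```python
from collections import Counter

# Hypothetical (speaker, listener) pairs for one scene.
dialogue = [
    ("Vivian", "Edward"), ("Edward", "Vivian"), ("Edward", "Vivian"),
    ("Kit", "Vivian"), ("Edward", "Stuckey"),
]

out_degree = Counter(speaker for speaker, _ in dialogue)   # outgoing links
in_degree = Counter(listener for _, listener in dialogue)  # incoming links

print(out_degree["Edward"], in_degree["Vivian"])
```

Tracking these counters scene by scene yields the time series of network indices from which a character's changing inner nature is inferred.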

Principal component analysis in C[11]-PIB imaging (주성분분석을 이용한 C[11]-PIB imaging 영상분석)

  • Kim, Nambeom;Shin, Gwi Soon;Ahn, Sung Min
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.19 no.1
    • /
    • pp.12-16
    • /
    • 2015
  • Purpose: Principal component analysis (PCA) is a multivariate analysis technique often used in neuroimage analysis to describe the structure of high-dimensional correlation as the structure of a lower-dimensional space. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of correlated variables into a set of values of linearly uncorrelated variables called principal components. In this study, to investigate the usefulness of PCA in brain PET image analysis, we analyzed C[11]-PIB PET images as a representative case. Materials and Methods: Nineteen subjects were included in this study (normal = 9, AD/MCI = 10). C[11]-PIB PET scans were acquired for 20 min starting 40 min after intravenous injection of 9.6 MBq/kg of C[11]-PIB. All emission recordings were acquired with the Biograph 6 Hi-Rez (Siemens-CTI, Knoxville, TN) in three-dimensional acquisition mode. A transmission map for attenuation correction was acquired using CT scans (130 kVp, 240 mA). Standardized uptake values (SUVs) of C[11]-PIB were calculated from PET/CT. In the normal subjects, 3T MRI T1-weighted images were obtained to create a C[11]-PIB template. Spatial normalization and smoothing were conducted as pre-processing for PCA using SPM8, and PCA was conducted using Matlab2012b. Results: Through PCA, we obtained linearly uncorrelated principal component images. The principal component images obtained through PCA can simplify the variation of the whole set of C[11]-PIB images into several principal components, including the variation of the neocortex and white matter and the variation of deep brain structures such as the pons. Conclusion: PCA is useful for analyzing and extracting the main pattern of C[11]-PIB images. As a multivariate analysis method, PCA might also be useful for pattern recognition in neuroimages such as FDG-PET or fMRI as well as C[11]-PIB images.
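The PCA step can be sketched in NumPy: centered images are decomposed with an orthogonal transformation, yielding linearly uncorrelated component scores. Random matrices stand in here for the flattened, spatially normalized PET images.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 19, 500
images = rng.standard_normal((n_subjects, n_voxels))   # stand-in PET images

centered = images - images.mean(axis=0)                # center each voxel
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt.T                               # principal-component scores

# The components are linearly uncorrelated: their covariance is diagonal.
cov = scores.T @ scores / (n_subjects - 1)
max_off_diag = float(np.max(np.abs(cov - np.diag(np.diag(cov)))))
print(max_off_diag < 1e-8)
```

Reshaping each row of `Vt` back to the image grid gives the principal component images the abstract describes, each capturing one independent pattern of variation across subjects.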


Aspects of Chinese Poetry in Korea and Japan in the 18th and 19th Centuries, as Demonstrated by Kim Chang Heup and Kan Chazan (김창흡과 간챠잔을 통해서 본 18·19세기 한일 한시의 한 면모)

  • Choi, Kwi-muk
    • Journal of Korean Classical Literature and Education
    • /
    • no.34
    • /
    • pp.115-147
    • /
    • 2017
  • This paper compared and reviewed the poetic theories and Chinese poems of the Korean author Kim Chang Heup and his Japanese counterpart, Kan Chazan. Kim Chang Heup and Kan Chazan shared largely the same opinions on poetry, and both rejected archaism. First, they did not simply copy High Tang poetry. Instead, they focused on the (sometimes trivial) scenery right in front of them and described the calm feelings evoked by what they had seen. They also adopted a sincere tone, instead of an exaggerated one, because both believed that poetry should be realistic. However, the differences between the two poets are also noteworthy. Kim Chang Heup claimed that feelings and scenery meet each other within a literary work through Natural Law, and that the linguistic expressions mediating the two are philosophical in nature. Kan Chazan, however, did not use Natural Law as a medium between feelings and scenery; the Japanese writer held that ideal poetic composition comes from close observation and detailed description of scenery. In sum, while Kim Chang Heup continued to express reason through scenery, Kan Chazan did not go further than depicting the scenery itself. In addition, Kim Chang Heup believed poetry was not only a representation of Natural Law but also a high-level linguistic activity that conveys a poetic concern for national politics. As a sadaebu (scholar-gentry), he held literature in high esteem because he thought it could achieve important outcomes. Kan Chazan, on the other hand, regarded literature as a form of entertainment, insisting that it had its own territory, separate from that of philosophy or politics. In other words, whereas Kim Chang Heup considered literature something close to a form of learning, Kan Chazan viewed it as art.
One might wonder whether the poetics of Kim Chang Heup and Kan Chazan reflect their individual accomplishments, or whether characteristics of Chinese poetry that Korean and Japanese poets had long sought had finally surfaced in these two writers. This paper argued that the two authors' poetics represent characteristics of Chinese poetry in Korea and Japan, or general characteristics of Korean and Japanese literature in a wider sense. Their insistence on depicting actual scenery in a unique way, free from the ideal model of literature, must have facilitated an outward materialization of Korean and Japanese literary characteristics that had developed over a long time.

Comparison and Evaluation of radiotherapy plans by multi leaf collimator types of Linear accelerator (선형가속기의 다엽콜리메이터 형태에 따른 치료계획 비교 평가)

  • Lim, Ji Hye;Chang, Nam Joon;Seok, Jin Yong;Jung, Yun Ju;Won, Hui Su;Jung, Hae Youn;Choi, Byeong Don
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.129-138
    • /
    • 2018
  • Purpose: The aim of this study was to compare the effect of multi leaf collimator (MLC) types on high-dimensional radiotherapy in clinically used treatment sites. Material and Method: 70 patients with lung cancer, spine cancer, prostate cancer, whole pelvis, head and neck, and breast cancer were included in this study. The high definition (HD) MLC of TrueBeam STx (Varian Medical Systems, Palo Alto, CA) and the Millennium (M) MLC of VitalBeam (Varian Medical Systems, Palo Alto, CA) were used. Radiotherapy plans were created for each patient under the same treatment goals with Eclipse (Version 13.7, Varian, Palo Alto, CA). To compare the plans, planning target volume (PTV) coverage, conformity index (CI), homogeneity index (HI), and clinical indicators for normal tissues in each treatment site were evaluated. To evaluate low-dose distribution, $V_{30%}$ values were compared according to MLC type. Additionally, the length and volume of the targets for each treatment site were investigated. Result: In stereotactic body radiotherapy (SBRT) plans for lung, the average PTV coverage was reduced by 0.52 % with the HD MLC. In SBRT plans for spine using the HD MLC, the average PTV coverage decreased by 0.63 % and the maximum dose decreased by 1.13 %. In the comparison of CI and HI, the values in the spine SBRT plan with the HD MLC were 1.144 and 1.079, and the values with the M MLC in the lung SBRT plan were 1.160 and 1.092. Among critical organs, the ipsilateral lung mean dose was reduced by 1.48 % with the HD MLC. In prostate cancer volumetric modulated arc therapy (VMAT) with the HD MLC, the mean dose and $V_{30}$ of the bladder and the mean dose and $V_{25}$ of the rectum were reduced by 0.53 %, 1.42 %, 0.97 %, and 0.69 %, respectively (p<0.05). The average heart mean dose was reduced by 0.83 % in breast cancer VMAT with the M MLC. Other assessment indices for the treatment sites showed no significant difference between plans with the two types of MLC.
Conclusion: Using the HD MLC had a positive impact on PTV coverage and normal tissue sparing for typically short or small targets, such as lung and spine SBRT and prostate VMAT. However, there was no significant difference for long or large targets, such as lung, head and neck, and whole pelvis VMAT.
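The conformity and homogeneity indices compared above can be defined in several ways; this sketch assumes the common RTOG-style definitions (CI = prescription isodose volume / target volume; HI = maximum dose / prescription dose), since the abstract does not state which variant was used, and the plan values are made up.

```python
def conformity_index(v_prescription_isodose, v_target):
    """RTOG-style CI: volume covered by the prescription isodose / target volume."""
    return v_prescription_isodose / v_target

def homogeneity_index(d_max, d_prescription):
    """RTOG-style HI: maximum dose / prescription dose."""
    return d_max / d_prescription

# Made-up plan values: 34.3 cc inside the prescription isodose for a 30.0 cc
# target, and a 21.6 Gy maximum for a 20.0 Gy prescription.
ci = round(conformity_index(34.3, 30.0), 3)
hi = round(homogeneity_index(21.6, 20.0), 3)
print(ci, hi)
```

For both indices, values closer to 1 indicate a tighter, more uniform plan, which is the sense in which the CI/HI figures in the results are compared.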
