• Title/Summary/Keyword: Text Composition


Case Study : Cinematography using Digital Human in Tiny Virtual Production (초소형 버추얼 프로덕션 환경에서 디지털 휴먼을 이용한 촬영 사례)

  • Jaeho Im;Minjung Jang;Sang Wook Chun;Subin Lee;Minsoo Park;Yujin Kim
    • Journal of the Korea Computer Graphics Society / v.29 no.3 / pp.21-31 / 2023
  • In this paper, we present a case study of cinematography using a digital human in virtual production. The case study covers the system overview of an LED-based virtual production stage and an efficient filming pipeline built around a digital human. Whereas LED-based virtual production typically projects only the background onto the LED walls, here we use a digital human as a virtual actor to film scenes in which it communicates with a real actor. In addition, to shoot the dialogue scenes between the real actor and the digital human in a real-time engine, we generated the digital human's speech animation in advance by applying our Korean lip-sync technology, which is driven by audio and text. We validated this filming approach by producing short drama content with a real actor and a digital human in an LED-based virtual production environment using a real-time engine.

The Discourse associated with mental illness on TV documentaries : The Completion of Distinction (TV 다큐멘터리가 생성한 정신장애 담론 : 구별짓기의 완성)

  • Chang, Hae Kyung;Woo, Ah Young
    • Korean Journal of Social Welfare Studies / v.42 no.1 / pp.179-217 / 2011
  • This paper discusses the discourse associated with mental illness and individuals with mental illness in the context of TV documentaries. Discourse is a linguistic product that prescribes and interprets reality and reconstructs it systematically; TV documentary content therefore illuminates the dominant discourse on mental illness through diverse forms of representation. We selected four TV documentaries, one from each public channel, and analyzed them using Fairclough's Critical Discourse Analysis, which provides an analytic frame consisting of three levels. The analysis reveals that the documentaries produce a discourse of "the completion of distinction" around mental illness and individuals with mental illness. At the textual level, the documentaries supply reasons for distinguishing "them" from "us"; at the level of discourse practice, they present the method and the agents of that distinction; and at the level of social practice, they reinforce viewers' ambivalent attitudes. An alternative discourse on mental illness and individuals with mental illness will only be constructed when those individuals recover their status as agents and produce strong voices about themselves.

A Study on the Construction of Financial-Specific Language Model Applicable to the Financial Institutions (금융권에 적용 가능한 금융특화언어모델 구축방안에 관한 연구)

  • Jae Kwon Bae
    • Journal of Korea Society of Industrial Information Systems / v.29 no.3 / pp.79-87 / 2024
  • Recently, the importance of pre-trained language models (PLMs) has been emphasized for natural language processing (NLP) tasks such as text classification, sentiment analysis, and question answering. Korean PLMs show high performance on general-purpose NLP but are weak in domains such as finance, medicine, and law. The main goal of this study is to propose a learning process and method for building a financial-specific language model that performs well not only in the financial domain but also in general-purpose domains. The five steps of building the financial-specific language model are: (1) financial data collection and preprocessing, (2) selection of a model architecture such as a PLM or foundation model, (3) domain data learning and instruction tuning, (4) model verification and evaluation, and (5) model deployment and utilization. Through this, we present a method for constructing pre-training data that exploits the characteristics of the financial domain, together with efficient LLM training methods: adaptive learning and instruction-tuning techniques.
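The five-step process above can be sketched as a plain pipeline. This is a minimal illustration only: every function and name below is hypothetical, standing in for the paper's stages rather than reproducing the authors' implementation.

```python
# Hypothetical sketch of the paper's five-step pipeline as plain Python stages.

def preprocess(raw_docs):
    """Step 1: collect and clean financial-domain text."""
    return [d.strip().lower() for d in raw_docs if d.strip()]

def select_architecture(options):
    """Step 2: pick a base PLM / foundation model (naively, the first option)."""
    return options[0]

def train(model, corpus, instructions):
    """Step 3: domain-adaptive learning plus instruction tuning (recorded only)."""
    return {"base": model, "domain_docs": len(corpus),
            "instruction_pairs": len(instructions)}

def evaluate(model):
    """Step 4: verify and evaluate (here, just check data was seen)."""
    return model["domain_docs"] > 0

def deploy(model, passed):
    """Step 5: deploy only if evaluation passed."""
    return "deployed" if passed else "rejected"

corpus = preprocess(["  Quarterly REPORT ", "", "Loan covenant text"])
model = train(select_architecture(["KoPLM-base"]), corpus,
              instructions=[("Q", "A")])
print(deploy(model, evaluate(model)))  # → deployed
```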

A Study on Speech Synthesizer Using Distributed System (분산형 시스템을 적용한 음성합성에 관한 연구)

  • Kim, Jin-Woo;Min, So-Yeon;Na, Deok-Su;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.29 no.3 / pp.209-215 / 2010
  • Recently, portable terminals have attracted attention thanks to wireless networks and large-capacity ROM, and as a result TTS (Text-to-Speech) systems are being embedded in them. High-quality synthesis remains difficult on a portable terminal, yet users demand it. In this paper, we propose a Distributed TTS (DTTS) composed of a server and a terminal. Built on corpus-based speech synthesis, the DTTS can deliver high-quality synthesis: the synthesis system on the server generates optimized speech-concatenation information after searching the database and transmits it to the terminal, and the synthesis system on the terminal produces high-quality synthetic speech at low computational cost using the transmitted concatenation information. The proposed method reduces complexity and power consumption on the terminal and allows efficient maintenance.
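The server-side step, selecting concatenation units from a corpus and sending only lightweight indices to the terminal, can be sketched as a toy greedy unit-selection search. The corpus, costs, and pitch values below are invented for illustration; the paper's actual corpus-based search is not specified at this level of detail.

```python
# Toy server-side unit selection: for each target phoneme, pick the corpus
# unit minimizing target cost (distance to desired pitch) plus concatenation
# cost (pitch jump from the previous unit). Only unit indices are returned,
# standing in for the compact concatenation info sent to the terminal.

CORPUS = {  # phoneme -> list of (unit_id, pitch) candidates (invented data)
    "a": [(0, 100), (1, 130)],
    "n": [(2, 110), (3, 90)],
}

def select_units(phonemes, target_pitch=105):
    chosen, prev_pitch = [], target_pitch
    for p in phonemes:
        best = min(CORPUS[p],
                   key=lambda u: abs(u[1] - target_pitch) + abs(u[1] - prev_pitch))
        chosen.append(best[0])
        prev_pitch = best[1]
    return chosen

print(select_units(["a", "n"]))  # → [0, 2]
```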

A Conceptual Architecture and its Experimental Validation of CCTV-Video Object Activitization for Tangible Assets of Experts' Visual Knowledge in Smart Factories (고숙련자 공장작업지식 자산화를 위한 CCTV-동영상 객체능동화의 개념적 아키텍처와 실험적 검증)

  • Eun-Bi Cho;Dinh-Lam Pham;Kyung-Hee Sun;Kwanghoon Pio Kim
    • Journal of Internet Computing and Services / v.25 no.2 / pp.101-111 / 2024
  • In this paper, we propose a conceptual architecture and an implementation approach for contextualizing unstructured CCTV-video frame data into structured XML-video textual data by using deep-learning neural network models and frameworks. Through the proposed architecture and implementation approach, we can eventually realize sharable working- and experiential-knowledge management platforms to be adopted in smart factories across various industries.
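The core idea, turning per-frame detections into structured XML text, can be sketched with the standard library. The detections here are hand-written dicts standing in for a deep detector's output, and the tag names are hypothetical, not the paper's schema.

```python
# Sketch of "video-object activitization": per-frame object detections
# (mocked below) serialized into structured XML via xml.etree.ElementTree.
import xml.etree.ElementTree as ET

detections = [  # stand-in for deep-learning detector output on CCTV frames
    {"frame": 1, "object": "worker", "activity": "welding"},
    {"frame": 2, "object": "worker", "activity": "inspecting"},
]

root = ET.Element("video")
for d in detections:
    frame = ET.SubElement(root, "frame", id=str(d["frame"]))
    obj = ET.SubElement(frame, "object", type=d["object"])
    ET.SubElement(obj, "activity").text = d["activity"]

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```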

BIM-based visualization technology for blasting in Underground Space (지하공간 BIM 기반 발파진동 영향 시각화 기술)

  • Myoung Bae Seo;Soo Mi Choi;Seong Jong Oh;Seong Uk Kim;Jeong Hoon Shin
    • Smart Media Journal / v.12 no.11 / pp.67-76 / 2023
  • We propose a visualization method for responding to civil complaints through analysis of the impact of blasting. To analyze the impact of blasting during tunnel excavation, we propose a simulation visualization method that accounts for the mutual influence of construction infrastructure by linking measurement data with a 3D BIM model. First, we defined the level of BIM modeling required for the simulation. We then collected vibration measurement data for the GTX-A construction site, created terrain and structure BIM, and developed a method for visualizing the measurement data using blast-vibration estimation. Next, we developed a spherical blasting-influence-source library for visualizing the influence source, and built a specification table that can be linked with Revit Dynamo automation logic. Using these results, we propose a method for easily visualizing the impact of blasting vibration in 3D.
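Blast-vibration estimation of this kind is commonly expressed with an empirical scaled-distance formula, where peak particle velocity (PPV) decays with distance scaled by charge weight. The formula below is the widely used square-root scaled-distance form; the site constants K and n are illustrative placeholders, not the paper's fitted GTX-A values.

```python
# Empirical blast-vibration estimator (square-root scaled-distance form):
#   PPV = K * (D / sqrt(Q)) ** (-n)
# where D is distance (m), Q is charge weight per delay (kg), and K, n are
# site constants normally fitted from measurement data (values here are
# illustrative only).
import math

def ppv(distance_m, charge_kg, K=200.0, n=1.6):
    scaled_distance = distance_m / math.sqrt(charge_kg)
    return K * scaled_distance ** (-n)

# Vibration grows with charge and decays with distance:
print(ppv(25.0, 4.0), ppv(50.0, 4.0))
```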

Analysis of deep learning-based deep clustering method (딥러닝 기반의 딥 클러스터링 방법에 대한 분석)

  • Hyun Kwon;Jun Lee
    • Convergence Security Journal / v.23 no.4 / pp.61-70 / 2023
  • Clustering is an unsupervised learning method that groups data based on features such as distance metrics, using data without known labels or ground-truth values. It has the advantage of being applicable to various types of data, including images, text, and audio, without the need for labeling. Traditional clustering techniques apply dimensionality reduction or extract specific features before clustering. With the advancement of deep learning models, however, research has emerged on deep clustering techniques that represent input data as latent vectors using models such as autoencoders and generative adversarial networks. In this study, we propose a deep-learning-based deep clustering technique: an autoencoder transforms the input data into latent vectors, a vector space is constructed according to the cluster structure, and k-means clustering is performed. We conducted experiments on the MNIST and Fashion-MNIST datasets using the PyTorch machine learning library, with a convolutional-neural-network-based autoencoder as the model. With k set to 10, the experiments show an accuracy of 89.42% on MNIST and 56.64% on Fashion-MNIST.
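The final clustering stage can be sketched in isolation: k-means applied to latent vectors. The toy 2D points below stand in for autoencoder latents (the paper's latents come from a convolutional autoencoder trained on MNIST/Fashion-MNIST), and the naive initialization is for illustration only.

```python
# Toy k-means over "latent vectors" (hand-written 2D points standing in
# for autoencoder outputs). Assignment minimizes squared Euclidean
# distance; centers are recomputed as cluster means.

def kmeans(points, k, iters=20):
    centers = points[:k]  # naive init, fine for this sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                    for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return centers, clusters

latents = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
centers, clusters = kmeans(latents, k=2)
print(sorted(len(c) for c in clusters))  # → [2, 2]
```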

Development of a Model for Identifying Drug Organizations and Their Scale through Tweet Clustering

  • Jin-Gyeong Kim;Eun-Young Park;Da-Sol Kim;Cho-Won Kim;Jiyeon Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.10 / pp.207-218 / 2024
  • In this paper, we propose a model for identifying drug trafficking organizations and assessing their scale by collecting drug-promotion tweets from the social media platform 'X', with a focus on investigating drug crimes that frequently occur among teenagers and young adults. Recently, various cyber crimes, such as drug distribution, illegal gambling, and sex offenses, have been on the rise, exploiting the anonymity provided by social media. Drug trafficking organizations in particular operate in a decentralized cell structure, where each member receives anonymous instructions regarding only their specific role and is not directly connected to other members. To track these crimes, we designed experimental scenarios using various clustering algorithms, such as K-means clustering and spectral clustering, alongside text embedding models such as BERT (Bidirectional Encoder Representations from Transformers) and GloVe (Global Vectors for Word Representation). The clustering results of each scenario were validated using Jaccard similarity and a full-scale investigation. We then analyzed the tweet clusters identified as the same drug organization across all scenarios, prioritizing the identification of high-priority accounts for cyber investigations.
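The Jaccard-similarity validation step compares how much two scenarios agree on which accounts form one organization. A minimal sketch, with invented placeholder account names rather than real data:

```python
# Jaccard similarity between the account sets two clustering scenarios
# assign to "the same organization": |A ∩ B| / |A ∪ B|.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

scenario_kmeans = {"@acct1", "@acct2", "@acct3"}     # hypothetical cluster
scenario_spectral = {"@acct2", "@acct3", "@acct4"}   # hypothetical cluster
print(jaccard(scenario_kmeans, scenario_spectral))   # → 0.5
```

A high score across all scenarios is what flags a cluster as a stable candidate organization worth investigative priority.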

The 'Fantastic' in the René Laloux's movie (<죽은 시간들(Les Temps Morts), 르네 랄루(René Laloux) 작, 1964>의 환상성)

  • Han, Sang-Jung;Park, Sang-Chun
    • Cartoon and Animation Studies / s.27 / pp.31-49 / 2012
  • This research aims to show the specificity of the 'fantastic' in the film Les Temps Morts (1964), directed by René Laloux (1929-2004), a director recognized worldwide. The film has a particular style, composed of four forms of expression: live-action footage (film), frame-by-frame recording (animation), drawing, and photography, and it is the strangest of all his films. Even when we grasp its key meaning, it leaves the audience with an uncertain, unclear feeling. If we consider the fantastic as a hesitation between the real and the unreal, this film offers the spectator fantastic feelings on several levels. To show how the film produces the fantastic, we divided it into 15 sequences according to visual and auditory criteria and analyzed the specificities of the fantastic at each level. First, the drawing style of Roland Topor does not let us escape easily from the feeling of fantasy. The four representational formats (drawing, photography, animation, film) are integrated into one whole by the auditory elements (music, narration). On the other hand, certain incomprehensible parts are not integrated into the whole, and Laloux leaves them unresolved. He leads the audience toward reality at the end of the film, but leaves uncertain sequences until the last moment; through these the audience again hesitates between the real and the unreal, and the fantastic is strengthened as a result. Finally, the fantastic of the film can be found at three levels: first, the fantastic drawing style of Roland Topor; second, the fantastic exposed through the overall composition and structure of the work; and third, the fantastic sentiment produced by leaving parts of the story incomprehensible to the audience.

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.119-138 / 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities; as a result, personal information is leaked and monetary damage occurs more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management, and suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing them from SNS such as Twitter. The collected data were given to two researchers, who decided whether each item was related to cybercriminality, particularly financial fraud. We then selected keywords from the vocabulary related to nominals and symbols. With the selected keywords, we searched and collected more than 820,000 articles from web sources such as Twitter, news sites, and blogs. The collected articles were refined through preprocessing and turned into learning data. Preprocessing consists of three steps: morphological analysis, stop-word removal, and valid part-of-speech selection. In the morphological analysis step, complex sentences are transformed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text.
In the part-of-speech selection step, only nouns and symbols are retained: nouns refer to things and therefore express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. Each selected item is labeled 'legal' or 'illegal', since turning the selected data into learning data requires classifying whether each item is legitimate. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning set (70%) and a test set (30%). SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal-value function; the cost is set higher than in typical cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), term frequency, and a collective intelligence method, using overall accuracy as the metric. The overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, clearly superior to term frequency, MLE, and the other baselines. The results suggest that the proposed method is valid and practically usable. In this paper, we propose a framework for managing crises caused by abnormalities in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying SVM-like discrimination algorithms to text analysis, and to practitioners in the fields of brand management and opinion mining.
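The conversion into a document-term matrix can be sketched in a few lines. Tokenization here is a naive whitespace split over invented example texts; the paper's pipeline uses Korean morphological analysis and keeps only nouns and symbols.

```python
# Sketch of building a document-term matrix: each row is a document,
# each column a vocabulary term, each cell a term count.

def build_dtm(docs):
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    rows = []
    for d in docs:
        row = [0] * len(vocab)
        for w in d.split():
            row[index[w]] += 1
        rows.append(row)
    return vocab, rows

# Invented examples: symbol-heavy text mimicking illegal-ad style.
docs = ["loan loan $$", "private loan contact"]
vocab, dtm = build_dtm(docs)
print(vocab)  # → ['$$', 'contact', 'loan', 'private']
print(dtm)    # → [[1, 0, 2, 0], [0, 1, 1, 1]]
```

Matrices like this are what the SVM (gamma 0.5, cost 10 in the paper) consumes as feature vectors.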