• Title/Summary/Keyword: 공간 텍스트 (spatial text)

418 search results (processing time 0.025 seconds)

A Study on the Architectural Environment as a Combination of Performance and Event (퍼포먼스.이벤트의 결합체로서 건축환경연구)

  • 김주미
    • Archives of design research
    • /
    • v.14
    • /
    • pp.121-138
    • /
    • 1996
  • The purpose of this study is to develop a new architectural language and design strategies that would anticipate and incorporate new historical situations and new paradigms for understanding the world. It consists of four sections, as follows: First, it presents a new interpretation of space, the human body, and movement as found in modern art, and tries to combine that new artistic insight with environmental design to provide a theoretical basis for performance-event architecture. Second, it conceives of the architectural environment as a combination of space, movement, and probabilistic situations rather than a mere conglomeration of material. It also perceives the environment as a stage for performance and the act of designing as a performance. Third, in this context, man is conceived of as an organic system that responds to, interacts with, and adapts to his environment through self-regulation. By the same token, architecture should be a dynamic system that undergoes constant transformation in its attempt to accommodate human actions and behaviors as man copes with a contemporary philosophy characterized by the principle of uncertainty, a fast-changing society, and new developments in technology. Fourth, the relativistic and organic viewpoint that constitutes the background for all this is radically different from the causalistic and mechanistic view that characterized the forms and functions of modernist design. The present study places great emphasis on a dematerialistic conception of the environment and puts forth a disprogramming method that would accommodate interchangeability in the passage of time and the intertextuality of form and function. In the end, performance-event architecture is a strategy based on the systems worldview that would enable the recovery of man's autonomy and the reconception of his environment as an object of art.

  • PDF

Research of Aesthetic Distance on the Cinematization of Novel (영화 <우리들의 일그러진 영웅>에 나타난 원작소설과의 미적 거리 연구)

  • Kim, Jong-Wan
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.6
    • /
    • pp.151-159
    • /
    • 2012
  • The purpose of this thesis is to examine the mechanism by which the aesthetic distances of a novel can be shown in film. In discussing point of view, a novel can be described by two factors, 'who tells' and 'who sees', whereas in film the narration is divided into a visual point of view and an auditive point of view; this thesis considers the phenomena arising from that difference. Next, it discusses the differences between novel and film through the aesthetic distances of Park Jongwon's film adaptation of Lee Munyeol's work. The thesis observes how the film adapted the three types of point of view and how they relate to the subject of the original novel. To this end, I tried to trace 'the distances' between character and identity, and between reader and author, and examined how the problem of 'aesthetic distance according to identity', grounded in the novel, is accepted by readers of the film and of the novel's text. This study proposes an analysis of the differences between novels and films in terms of narrative point of view. Although the subject is presented chapter by chapter in the novel and through continuity in the film, this paper finds that both the film and the novel present the subject of readers' differing points of view on the 'author's and director's identity'.

A Strategy To Reduce Network Traffic Using Two-layered Cache Servers for Continuous Media Data on the Wide Area Network (이중 캐쉬 서버를 사용한 실시간 데이터의 광대역 네트워크 대역폭 감소 정책)

  • Park, Yong-Woon;Beak, Kun-Hyo;Chung, Ki-Dong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.10
    • /
    • pp.3262-3271
    • /
    • 2000
  • Continuous media objects, due to their large volume and the real-time constraints on their delivery, are likely to consume much network bandwidth. Generally, proxy servers are used to hold frequently requested objects so as to reduce the network traffic to the central server, but most of them are designed for text and image data, so they do not go well with continuous media data. In this paper, we therefore propose a two-layered network cache management policy for continuous media object delivery on wide area networks. With the proposed cache management scheme, each LAN has one LAN cache and is further divided into a group of sub-LANs, each of which also has its own sub-LAN cache. Furthermore, each object is partitioned into two parts, a front-end and a rear-end partition. These can be loaded in the same cache or separately in different network caches according to their access frequencies. By doing so, cache replacement overhead can be reduced compared to full-size data allocation and replacement, which eventually reduces the backbone network traffic to the origin server.

  • PDF
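The front-end/rear-end partitioning described in the abstract can be sketched as follows; the class name, split ratio, and hot-object threshold are illustrative assumptions, not values from the paper:

```python
class TwoLayerCache:
    """Sketch of a two-layered cache: a shared LAN cache plus a sub-LAN cache."""

    def __init__(self, hot_threshold=3):
        self.sub_lan_cache = {}   # close to clients: holds hot front-end partitions
        self.lan_cache = {}       # shared per LAN: holds the remaining partitions
        self.hot_threshold = hot_threshold
        self.hits = {}

    def request(self, object_id):
        # track access frequency per object
        self.hits[object_id] = self.hits.get(object_id, 0) + 1

    def place(self, object_id, data, split_ratio=0.2):
        """Split an object into front-end / rear-end partitions and place them."""
        cut = int(len(data) * split_ratio)
        front, rear = data[:cut], data[cut:]
        if self.hits.get(object_id, 0) >= self.hot_threshold:
            # frequently requested: keep the front-end close to the clients
            self.sub_lan_cache[object_id] = front
            self.lan_cache[object_id] = rear
        else:
            # cold object: both partitions stay in the shared LAN cache
            self.lan_cache[object_id] = front + rear
```

Replacing only one partition at a time is what keeps the replacement overhead below that of full-object allocation.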

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.47-60
    • /
    • 2010
  • Most research on classification has used kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), which are known as learning-based models, and the Bayesian classifier and NNA (Neural Network Algorithm), which are known as statistics-based methods. However, there are limitations of space and time when classifying the vast number of web pages on today's internet. Moreover, most classification studies use a uni-gram feature representation, which is not good at representing the real meaning of words. In the case of Korean web page classification, there are additional problems because Korean words can have multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) is proposed to classify well in this environment (large data sets and word polysemy). LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three different matrices and reduces their dimensions. From this SVD step, it is possible to create a new low-dimensional semantic space for representing vectors, which can make classification efficient and can reveal the latent meaning of words and documents (or web pages). Although LSA is good at classification, it has some drawbacks. As SVD reduces the dimensions of the matrix and creates a new semantic space, it considers only which dimensions represent vectors well, not which dimensions discriminate between them. This is one reason why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA which selects optimal dimensions that both discriminate and represent vectors well, minimizing these drawbacks and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we derive further improvement in classification by creating and selecting features, reducing stopwords, and weighting specific values statistically.

  • PDF
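The SVD-based dimension reduction that LSA relies on can be sketched in a few lines; the toy term-document matrix below is illustrative, not the paper's data:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

# SVD decomposes A into three matrices: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keeping only the k largest singular values yields the low-dimensional
# semantic space in which documents are represented and compared.
k = 2
doc_vectors = np.diag(s[:k]) @ Vt[:k, :]   # each column: a document in k-D space
```

The paper's point is that this truncation keeps the dimensions that *represent* the data best, which are not necessarily the ones that *discriminate* classes best.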

Semantic Access Path Generation in Web Information Management (웹 정보의 관리에 있어서 의미적 접근경로의 형성에 관한 연구)

  • Lee, Wookey
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.2
    • /
    • pp.51-56
    • /
    • 2003
  • The structuring of Web information supports a strong user-side viewpoint: a user pursues his/her own needs when browsing a specific Web site. Using not only the depth-first algorithm but also the breadth-first algorithm, the Web information is abstracted into a hierarchical structure. A prototype system is suggested in order to visualize and represent semantic significance. As a motivating example, a Web test site is presented and analyzed with respect to several keywords. As future research, the Web site model should be extended to the whole WWW, and an accurate assessment function needs to be devised by which the suggested models can be evaluated.

  • PDF
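The hierarchical abstraction of a site's link structure can be sketched with a breadth-first traversal; the link graph below is a hypothetical example, not the paper's test site:

```python
from collections import deque

# Hypothetical link graph of a small Web site: page -> pages it links to.
links = {
    "index": ["about", "products"],
    "about": ["team"],
    "products": ["p1", "p2"],
    "team": [], "p1": [], "p2": [],
}

def bfs_levels(root):
    """Abstract the link graph into a hierarchy: page -> depth from the root."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for nxt in links.get(page, []):
            if nxt not in depth:          # first visit fixes the level
                depth[nxt] = depth[page] + 1
                queue.append(nxt)
    return depth
```

The resulting levels give the hierarchical skeleton onto which semantic access paths can be drawn.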

Identifying Landscape Perceptions of Visitors' to the Taean Coast National Park Using Social Media Data - Focused on Kkotji Beach, Sinduri Coastal Sand Dune, and Manlipo Beach - (소셜미디어 데이터를 활용한 태안해안국립공원 방문객의 경관인식 파악 - 꽃지해수욕장·신두리해안사구·만리포해수욕장을 대상으로 -)

  • Lee, Sung-Hee;Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.46 no.5
    • /
    • pp.10-21
    • /
    • 2018
  • This study used text mining methodology, focusing on the perceptions of the landscape embedded in the text that users spontaneously uploaded to "Taean Travel" blog posts. The study area is the Taean Coast National Park; most of the places found by searching 'Taean Travel' on blogs are located within it. We conducted a network analysis of the top three places and extracted keywords related to the landscape. Finally, using centrality and cohesion analyses, we derived landscape perceptions and the major characteristics of those landscapes. As a result, it was possible to identify the main tourist places in Taean, the individual landscape experiences, and the landscape perceptions of specific places. Three different types of landscape characteristics emerged: atmosphere-related keywords appeared for Kkotji Beach, symbolic image-related keywords for Sinduri Coastal Sand Dune, and landscape object-related keywords for Manlipo Beach. It can be inferred that the characteristics of these three places are perceived differently. Kkotji Beach is recognized as a place to appreciate the sunset and as a base for the Taean Coast National Park's trekking course. Sinduri Coastal Sand Dune is recognized as a place with unusual scenery and as an ecologically valuable space. Finally, Manlipo Beach is adjacent to the Chunlipo Arboretum, which is often visited by tourists, and the beach itself is recognized as a place with an impressive appearance. Social media data is very useful because it enables analysis of various types of content outside an expert's point of view. In this study, we used social media data to analyze how people perceive and enjoy landscapes by integrating various kinds of content, such as landscape objects, images, and activities. However, because social media data may be amplified or distorted by users' memories and perceptions, field surveys are needed to verify the results of this study.
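The keyword co-occurrence network and degree-centrality step described above can be sketched as follows; the posts and keywords are invented stand-ins for the crawled blog texts:

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists extracted from three hypothetical blog posts.
posts = [
    ["sunset", "beach", "trekking"],
    ["sunset", "beach", "camping"],
    ["dune", "ecology", "scenery"],
]

# Count how often each keyword pair co-occurs within a post.
cooc = Counter()
for keywords in posts:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooc[(a, b)] += 1

# Degree centrality: number of distinct keywords each keyword co-occurs with.
degree = Counter()
for (a, b) in cooc:
    degree[a] += 1
    degree[b] += 1
```

High-degree keywords then indicate the terms that anchor a place's perceived landscape character.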

The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui;Baek, Sung Ha;Lee, Soon Jo;Bae, Hae Young
    • Spatial Information Research
    • /
    • v.20 no.5
    • /
    • pp.99-109
    • /
    • 2012
  • Recently, due to the growth of social media and the spread of smartphones, the amount of data has increased considerably through heavy use of SNS (Social Network Services). Accordingly, the Big Data concept has emerged, and many researchers are seeking ways to make the best use of big data. To maximize the creative value of the big data held by many companies, it must be combined with existing data. The physical and logical storage structures of the data sources are so different that a system which can integrate and manage them is needed. MapReduce was developed to process big data, with the advantage of processing data fast through distributed processing. However, it is difficult to construct and store such a system for all keywords, and because data must be stored before it can be searched, real-time processing is difficult to some extent. It also incurs extra expense to process complex events without a structure for processing heterogeneous data. To solve this problem, an existing Complex Event Processing (CEP) system can be used: it takes data from different sources and combines them with each other, making complex event processing possible, which is useful for real-time processing, especially of stream data. Nevertheless, unstructured data based on the text of SNS posts and internet articles is managed as text, and strings must be compared every time query processing is done, which results in poor performance. Therefore, we make it possible to manage unstructured data and process queries fast in a complex event processing system, and we extend the data-complex function to give a logical schema to strings. This is accomplished by converting string keywords into integer types through filtering with a keyword set. In addition, by using the Complex Event Processing System and processing stream data in memory in real time, we reduce the time spent reading data for query processing after it has been stored on disk.
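The keyword-set filtering that converts string keywords into integer types can be sketched as follows; the keyword table and function names are illustrative assumptions, not the paper's implementation:

```python
# Keyword set: each registered string keyword maps to an integer id, so the
# event stream can be filtered by integer comparison instead of string matching.
keyword_ids = {"flood": 1, "fire": 2, "earthquake": 3}

def encode_event(text):
    """Reduce an unstructured text event to the integer ids of known keywords."""
    return sorted(keyword_ids[w] for w in text.lower().split() if w in keyword_ids)

def matches(event_ids, query_id):
    # integer comparison in the in-memory stream, no per-event string scans
    return query_id in event_ids
```

Unknown words are dropped at encode time, so downstream query processing never touches raw strings.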

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and even monetary damages occur more frequently. In this study, we propose a method to analyze which sentences and documents sent to SNS are related to financial fraud. First of all, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g. risk identification) and post-cybercriminality steps; among these, this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing them from SNS such as Twitter. The collected data were given to two researchers to decide whether or not they are related to cybercriminality, particularly financial fraud. We then selected some of the vocabulary as keywords if it was related to the relevant nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs, and more than 820,000 articles were collected. The collected articles were refined through preprocessing and made into learning data. The preprocessing process is divided into a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, a complex sentence is transformed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only nouns and symbols are considered: since nouns refer to things, they express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. To turn the selected data into learning data, each item must be classified as legitimate or not, so each is labeled 'legal' or 'illegal'. The processed data are then converted into a corpus and a Document-Term Matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set and a test data set; in this study, we set the learning data at 70% and the test data at 30%. SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in general cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and Collective Intelligence methods, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, which is clearly superior to Term Frequency, MLE, and the others. Hence, the result suggests that the proposed method is valid and practically usable. In this paper, we propose a framework for managing crises caused by abnormalities in unstructured data sources such as SNS. We hope this study will contribute to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
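The SVM step with the reported parameters (gamma = 0.5, cost = 10) can be sketched with scikit-learn; the toy messages below are invented stand-ins for the collected SNS data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# Toy messages standing in for the preprocessed SNS posts (not the paper's data).
texts = [
    "low interest loan approved today call now",
    "private loan no credit check fast cash",
    "meeting friends for lunch at the park",
    "great weather for a walk by the river",
]
labels = ["illegal", "illegal", "legal", "legal"]

# Build the Document-Term Matrix from the texts
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# The paper reports gamma = 0.5 and cost (C) = 10 for its SVM
clf = SVC(kernel="rbf", gamma=0.5, C=10)
clf.fit(X, labels)
```

In the study this model is trained on the 70% split and scored on the held-out 30%; the tiny corpus here only illustrates the pipeline shape.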

An Embedded Watermark into Multiple Lower Bitplanes of Digital Image (디지털 영상의 다중 하위 비트플랜에 삽입되는 워터마크)

  • Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.6 s.312
    • /
    • pp.101-109
    • /
    • 2006
  • Recently, with the widespread use of the internet and the development of related application programs, the distribution and use of multimedia content (text, images, video, audio, etc.) has become very easy. A digital signal may be easily duplicated, and the duplicated data can have the same quality as the original, so it is difficult to establish the original owner. As solutions to this problem, copyright-protection methods such as encryption and watermarking have been developed. Digital watermarking is used to protect IP (Intellectual Property) and to authenticate the owner of multimedia content. In this paper, the proposed watermarking algorithm embeds a watermark into multiple lower bitplanes of a digital image. In the proposed algorithm, the original and watermark images are each decomposed into bitplanes, and the watermarking operation is executed on the corresponding bitplanes. The position of the watermark image embedded in each bitplane is used as the watermarking key, and embedding is performed in multiple lower bitplanes, which have no influence on human visual recognition. Thus, this algorithm can represent the watermark image with multiple inherent patterns and needs only a small watermarking payload. In the experiments, the author confirmed high robustness against JPEG, MEDIAN, and PSNR attacks, but weakness against NOISE, RNDDIST, ROT, SCALE, and SS attacks in the spatial domain, when the criterion PSNR of the watermarked image is 40 dB.
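A minimal sketch of embedding a watermark into a single lower bitplane, the operation the paper repeats over multiple lower bitplanes, might look as follows (the functions are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def embed_bitplane(image, watermark_bits, plane):
    """Embed a binary watermark into one lower bitplane of an 8-bit image."""
    mask = np.uint8(0xFF ^ (1 << plane))          # clear the target bitplane
    return (image & mask) | (watermark_bits.astype(np.uint8) << plane)

def extract_bitplane(image, plane):
    """Recover the watermark bits from the given bitplane."""
    return (image >> plane) & 1
```

Because only a low-order plane changes, each pixel value shifts by at most 2**plane, which is why embedding in the lower planes stays below the threshold of human visual recognition.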

The Effectiveness of Explicit Form-Focused Instruction in Teaching the Schwa /ə/ (영어 약모음 /ə/ 교수에 있어서 명시적 Form-Focused Instruction의 효과 연구)

  • Lee, Yunhyun
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.8
    • /
    • pp.101-113
    • /
    • 2020
  • This study aimed to explore how effective explicit form-focused instruction (FFI) is in teaching the schwa vowel /ə/ to EFL students in a classroom setting. The participants were 25 female high school students, divided into an experimental group (n=13) and a control group (n=12). One female American also participated in the study to provide a reference speech sample. The treatment, which involved shadowing model pronunciation by the researcher and a free text-to-speech program, together with the researcher's feedback in private sessions, was given to the experimental group over a month and a half. The speech samples, in which the participants read 14 polysyllabic stimulus words followed by sentences containing those words, were collected before and after the treatment. The paired-samples t test and the non-parametric Wilcoxon signed-rank test were used for analysis. The results showed that the participants of the experimental group in the post-test reduced the duration of the schwa by around 40 percent compared to the pre-test. However, little effect was found in bringing the participants' distribution patterns of /ə/, measured by the F1/F2 formant frequencies, closer to the reference point of 539 Hz (F1) by 1797 Hz (F2). The findings of this study suggest that explicit FFI with multiple repetitions and corrective feedback is partly effective in teaching pronunciation.
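The paired-samples t test and Wilcoxon signed-rank test used in the analysis can be sketched with SciPy; the pre/post durations below are hypothetical values, not the study's measurements:

```python
from scipy import stats

# Hypothetical pre/post schwa durations (ms) for seven speakers; the roughly
# 40% reduction mirrors the direction of the reported effect, not its data.
pre  = [92, 105, 88, 110, 97, 101, 95]
post = [60, 66, 55, 70, 58, 64, 59]

# Paired-samples t test: compares the same speakers before and after treatment
t_stat, p_ttest = stats.ttest_rel(pre, post)

# Non-parametric counterpart for small samples or non-normal differences
w_stat, p_wilcoxon = stats.wilcoxon(pre, post)
```

Running both tests, as the study does, guards against the normality assumption failing on a small sample.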