• Title/Summary/Keyword: representation system

Search Result 1,649, Processing Time 0.026 seconds

A Study on Practices and Improvement Factors of Financial Disclosures in early stages of IFRS Adoption - An Integrative Approach of Korean Cases: Embracing Views of Reporting Entities and Users of Financial Statements (IFRS 공시 실태 개선방안에 대한 소고 - 보고기업, 정보이용자 요인을 고려한 통합적 접근 -)

  • Kim, Hee-Suk
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.7 no.2
    • /
    • pp.113-127
    • /
    • 2012
  • From the end of the first quarter of 2012, Korean mandatory adopters began releasing financial reports conforming to the K-IFRS (Korean-adopted International Financial Reporting Standards). Major characteristics of IFRS, such as its 'principles-based' features, consolidated reporting, 'fair value' measurement, and increased pressure for non-financial disclosures, have resulted in brief and varied disclosure practices in the main body of each statement and a vast amount of required note descriptions. Meanwhile, most previous studies on IFRS disclosures have taken regulatory and/or 'complete information' perspectives, mainly focusing on suggesting further enforcement of strengthened requirements and providing guidelines for specific treatments. As an extension of prior findings and suggestions, this study conducts an integrative approach embracing the views of reporting entities and users of financial information. In spite of all the state-driven efforts for faithful representation and comparability of corporate financial reports, an overhaul of disclosure practices of fiscal years 2010 and 2011 revealed numerous cases of insufficiency and discordance with respect to mandatory norms and market expectations. As to the causes of such shortcomings, this study identified several factors on the corporate side and among users of the information: some inherent aspects of IFRS, industry- and firm-specific context, expenditures related to internalizing the IFRS system, a reduced time frame for presentation, and a lack of the clarity and detail needed to meet the qualities of information - understandability, comparability, etc. - commonly requested by the user group. 
In order to improve current disclosure practices, a dual approach is suggested. First, to encourage and facilitate implementation, the study discusses (1) further segmentation and differentiation of mandates among companies, (2) redefining the scope and depth of note descriptions, (3) diversification and coordination of reporting periods, and (4) providing support for equipping disclosure systems and granting incentives for best practices. Second, as harder measures, it recommends (5) regularizing active involvement of corporate and user-group delegations in the establishment and amendment process of K-IFRS and (6) enforcing detailed and standardized disclosure on reporting entities.


Semantic Access Path Generation in Web Information Management (웹 정보의 관리에 있어서 의미적 접근경로의 형성에 관한 연구)

  • Lee, Wookey
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.2
    • /
    • pp.51-56
    • /
    • 2003
  • The structuring of Web information supports a user-side viewpoint: a user wants to satisfy his or her own needs when browsing a specific Web site. Using not only the depth-first algorithm but also the breadth-first algorithm, the Web information is abstracted into a hierarchical structure. A prototype system is suggested in order to visualize and represent semantic significance. As a motivating example, a test Web site is suggested and analyzed with respect to several keywords. As future research, the Web site model should be extended to the whole WWW, and an accurate assessment function needs to be devised by which the several suggested models can be evaluated.
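The hierarchical abstraction described above can be sketched with a breadth-first traversal that turns a site's link graph into a tree of shortest access paths from the home page. This is a minimal illustration only; the page names and graph are invented, not the paper's test site:

```python
from collections import deque

def access_tree(links, root):
    """Build a hierarchy of shortest access paths via BFS.

    links: dict mapping each page to the pages it links to.
    Returns {page: parent} for every page reachable from root."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for nxt in links.get(page, []):
            if nxt not in parent:      # first discovery = shortest access path
                parent[nxt] = page
                queue.append(nxt)
    return parent

# Hypothetical site structure
site = {
    "home": ["about", "products"],
    "products": ["p1", "p2"],
    "about": ["contact"],
    "p1": ["home"],                    # back-link; ignored by the BFS tree
}
tree = access_tree(site, "home")
```

Walking the resulting parent map bottom-up from any page yields its semantic access path (e.g. home → products → p1), which is the kind of hierarchical structure the abstract refers to.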


A Study on Repetition and Multiplicite of Superhero Comics (슈퍼히어로 코믹의 반복과 다양체적 형식에 관한 연구)

  • Park, Seh-Hyuck
    • Cartoon and Animation Studies
    • /
    • s.28
    • /
    • pp.27-53
    • /
    • 2012
  • American superhero comics are produced in numerous forms for multiple, cross-platform media. In cinema, films based on superhero comics top the list of all-time box-office records. The same 'invasion of the superhero' phenomenon was duplicated at the Korean box office in 2012; however, Korean fans are not familiar with the superheroes of the publications that provide the original source of characters and stories for the other superhero-related media products. Based on observations of the multiplicity and vast world-building of superhero comics, this paper attempts to associate the continuous creative nature and infinite repetition of superheroes and comic texts with the identity of the superhero on an ontological level. First, chapter 2 examines how one superhero exists individually in multiple different worlds by utilizing the concept of the 'multi-universe' or 'multiverse' in comic texts. Initially, duplicating a superhero across multiple settings and series destroyed continuity, allowed contradiction and paradox, and confused the narrative as a whole, but it also gave comics the chance to be more vibrant and experimental with their stories and characters. Chapter 3 analyzes superhero comic texts in the light of repetition, a concept developed by the French philosopher Gilles Deleuze, and makes the argument that superheroes and texts are not repeated to generate a surplus of source material for economic, utilitarian purposes; repetition is, first and foremost, a repetition of creativeness and capability. Many concepts introduced by Deleuze in his early masterpiece Différence et Répétition are taken to support this argument - mainly his critical views on the generality of identity and his effort to replace Plato's system of representation with the vibrant, creative, renewing energy of répétition. 
According to Deleuze, repetition is a similar concept, if not identical, to what Nietzsche called the 'eternal return', which allows the return of the Overhuman or Superhuman, and he extends this idea so that the returning Overhuman is the singular simulacre which opposes the generalization of identity in the likes of Plato's Idea. Thus, the superhero's identity is an ever-changing, ever-returning, and ever-renewing Overhuman. The superhero must be repeated to fully actualize his or her existence. Also, based on Deleuze's reading of Bergson's texts on the virtual, the superhero - Superman, for example - is an actualization of his multiplicité, the internally multiple and differentiated variations of itself. All of these Supermans' multiplicités share a common memory, past, and duration: the virtual of Superman. Superman becomes himself only by actualizing the movement and differentiation from this multiplicité in his virtual on the surface of reality. In chapter 4, the répétition and multiplicité of a popular Korean comic-book character, Oh Hae-Sung, are analyzed and compared to those of Superman, to distinguish repetition from répétition and multiplicité from diversity.

A Systematic Review of Developmental Coordination Disorders in South Korea: Evaluation and Intervention (국내의 발달성협응장애(DCD) 연구에 관한 체계적 고찰 : 평가와 중재접근 중심으로)

  • Kim, Min Joo;Choi, Jeong-Sil
    • The Journal of Korean Academy of Sensory Integration
    • /
    • v.19 no.1
    • /
    • pp.69-82
    • /
    • 2021
  • Objective : This work intended to provide basic information about Developmental Coordination Disorder (DCD) in South Korea for researchers and practitioners related to occupational therapy. Previous research on screening for DCD and on the effects of intervention programs was reviewed. Methods : Peer-reviewed papers relating to DCD and published in Korea from January 1990 to December 2020 were systematically reviewed. The search terms "developmental coordination disorder," "development coordination," and "developmental coordination" were used to identify previous Korean research in this area from three representative databases: the Research Information Sharing Service, the Korean Studies Information Service System, and Google Scholar. A total of 4,878 articles were identified through the three search engines, and seventeen articles were selected for analysis after removing those that met the overlapping or exclusion criteria. We adopted "the conceptual model" to analyze the selected articles with respect to DCD assessment and intervention. Results : Twelve of the 17 studies were at the qualitative level of Level 2, using a non-randomized comparison between two groups. The Movement Assessment Battery for Children and its second edition were the most frequently used tools for assessing children for DCD. Among the intervention studies, eight articles (47%) adopted a dynamic systems approach; a normative functional skill framework and cognitive neuroscience were each used in 18% of the pieces; and 11% of the articles applied neurodevelopmental theory. Only one article used a combined approach of normative functional skill and general abilities. These papers mainly focused on the movement characteristics of children with DCD and the intervention effects of exercise or sports programs. 
Conclusion : Most of the reviewed studies investigated the movement characteristics of DCD or explored the effectiveness of particular intervention programs. In the future, it would be useful to investigate the feasibility of different assessment tools and to establish the effectiveness of the various interventions used in rehabilitation for better motor performance in children with DCD.

Open Digital Textbook for Smart Education (스마트교육을 위한 오픈 디지털교과서)

  • Koo, Young-Il;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.177-189
    • /
    • 2013
  • In smart education, the role of the digital textbook is very important as a face-to-face medium for learners. Standardization of the digital textbook will promote the industrialization of digital textbooks for content providers and distributors as well as for learners and instructors. This study looks for ways to standardize digital textbooks oriented toward the following three objectives. (1) Digital textbooks should undertake the role of a medium for blended learning that supports on- and off-line classes, should operate on a common EPUB viewer without a special dedicated viewer, and should utilize the existing frameworks of e-learning contents and learning management. The reason to consider EPUB as the standard for digital textbooks is that digital textbooks then do not need to specify another standard for the form of books and can take advantage of the industrial base of the EPUB standard: rich content and an established distribution structure. (2) Digital textbooks should provide a low-cost open-market service built on currently available standard open software. (3) To provide appropriate learning feedback to students, digital textbooks should provide a foundation that accumulates and manages all learning-activity information according to a standard infrastructure for educational big-data processing. In this study, the digital textbook in a smart education environment is referred to as the open digital textbook. 
The components of the open digital textbook service framework are (1) digital textbook terminals such as smart pads, smart TVs, smart phones, PCs, etc.; (2) a digital textbook platform to display and run digital contents on those terminals; (3) a learning contents repository, which exists in the cloud and maintains accredited learning contents; (4) an app store providing and distributing secondary learning contents and learning tools from content-developing companies; and (5) an LMS as a learning support/management tool that classroom teachers use for creating instruction materials. In addition, locating all of the hardware and software that implement the smart education service within the cloud should take advantage of cloud computing for efficient management and reduced expense. The open digital textbook of smart education can be considered an e-book-style interface to the LMS for learners. In an open digital textbook, the representation of text, images, audio, video, equations, etc. is a basic function, but painting, writing, problem solving, etc. are beyond the capabilities of a simple e-book. Teacher-to-student, learner-to-learner, and team-to-team communication is required through the open digital textbook. To represent student demographics, portfolio information, and class information, the standards used in e-learning are desirable. To pass learner-tracking information about the learner's activities to the LMS (Learning Management System), the open digital textbook must have a recording function and a function for communicating with the LMS. DRM is a function for protecting various copyrights. Currently, the DRM of an e-book is controlled by the corresponding book viewer. If the open digital textbook adopts a DRM usable across the various DRM standards of different e-book viewers, the implementation of redundant features can be avoided. 
Security/privacy functions are required to protect information about study or instruction from third parties. UDL (Universal Design for Learning) is a learning support function for those with disabilities who have difficulty in learning courses. The open digital textbook, which is based on the e-book standard EPUB 3.0, must (1) record learning-activity log information and (2) communicate with the server to support the learning activity. While the recording and communication functions, which are not determined by current standards, can be implemented in JavaScript and utilized in current EPUB 3.0 viewers, a strategy of proposing such functions as the next generation of the e-book standard, or as a special standard (EPUB 3.0 for education), is needed. Future research in this study will implement an open-source program with the proposed open digital textbook standard and present new educational services including big-data analysis.

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.111-131
    • /
    • 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, compared with research on bankruptcy prediction. The few that exist mainly focus on audited firms because financial data collection is easier for these firms. In reality, however, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which mainly comprise small and medium-sized firms. The purpose of this paper is to classify non-audited firms under distress according to their financial ratios using a data mining technique, the Self-Organizing Map (SOM). A SOM is a type of artificial neural network that is trained using unsupervised learning to produce a lower-dimensional discretized representation of the input space of the training samples, called a map. The SOM differs from other artificial neural networks in that it applies competitive learning as opposed to error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space. It is one of the most popular and successful clustering algorithms. In this study, we classify the types of financially distressed firms, specifically non-audited firms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004 for the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fell into this pattern. In pattern 3, the growth ratio was the worst among all patterns. It is speculated that the firms of this pattern may be under distress due to severe competition in their industries. Approximately 30% of the firms fell into this group. 
In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio were not at the level of the growth ratio. It is concluded that the firms of this pattern were under distress in pursuit of expanding their business. About 25% of the firms were in this pattern. Last, pattern 5 encompassed very solvent firms. Perhaps the firms of this pattern were distressed due to a bad short-term strategic decision or due to problems with the firms' owner-managers. Approximately 18% of the firms were under this pattern. This study makes both an academic and an empirical contribution. From the academic perspective, non-audited companies, which tend to go bankrupt easily and have unstructured or easily manipulated financial data, are classified by a data mining technique (the Self-Organizing Map), rather than large audited firms with well-prepared and reliable financial data. From the empirical perspective, even though the analysis uses the financial data of non-audited firms, it is useful for finding the first-order symptoms of financial distress, which helps forecast the bankruptcy of such firms and manage early-warning and alert signals. The limitation of this research is that only 100 corporations were analyzed, due to the difficulty of collecting financial data on non-audited firms, which made it hard to proceed to analysis by category or size. Also, non-financial qualitative data are crucial for the analysis of bankruptcy; thus, non-financial qualitative factors should be taken into account in a subsequent study. This study sheds some light on distress prediction for non-audited small and medium-sized firms in the future.
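The SOM training loop described in the abstract - competitive learning plus a topology-preserving Gaussian neighborhood function - can be sketched in NumPy. This is a minimal illustration on synthetic data standing in for the study's 10 financial ratios; the grid size, decay schedule, and cluster values are assumptions, not the study's settings:

```python
import numpy as np

def train_som(X, grid=(3, 3), n_iter=1000, lr0=0.5, sigma0=1.0, seed=0):
    """Train a minimal Self-Organizing Map on the rows of X."""
    rng = np.random.default_rng(seed)
    h, w = grid
    n, d = X.shape
    W = rng.random((h, w, d))                       # weight grid
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    for t in range(n_iter):
        x = X[rng.integers(n)]
        # best matching unit (BMU): the node closest to the sample
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(axis=2)), (h, w))
        decay = np.exp(-t / n_iter)
        lr, sigma = lr0 * decay, sigma0 * decay
        # Gaussian neighborhood pulls the BMU's grid neighbors along,
        # which is what preserves the topology of the input space
        d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
        theta = np.exp(-d2 / (2 * sigma ** 2))[:, :, None]
        W += lr * theta * (x - W)
    return W

def bmu_of(W, x):
    """Grid coordinates of the node that a sample maps to."""
    return np.unravel_index(np.argmin(((W - x) ** 2).sum(axis=2)), W.shape[:2])

# Two synthetic "ratio profiles": severely vs. mildly distressed firms
rng = np.random.default_rng(1)
serious = rng.normal(0.1, 0.05, size=(50, 10))
mild = rng.normal(0.9, 0.05, size=(50, 10))
W = train_som(np.vstack([serious, mild]))
```

After training, firms that map to the same or neighboring grid nodes share a distress profile, which is how the five patterns in the study would be read off the map.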

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, cameras, etc. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence-structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, the word '예쁘고' consists of the morphemes '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit of Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions then arise. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors so as to improve the classification accuracy of a deep learning model? 
Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To approach these research questions, we generate various types of morpheme vectors reflecting them and then compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ according to the following three criteria. 
First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they are distinguished by the degree of data preprocessing: either sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: either the morphemes themselves or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. It appears that utilizing text from the same domain even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS-tag attachment, devised for the high proportion of homonyms in Korean, and the minimum-frequency threshold for a morpheme to be included seem not to have any definite influence on the classification accuracy.
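The morpheme-level input construction can be illustrated with a short sketch that attaches POS tags to morphemes (one of the input variants compared in the study) and generates the (context, target) pairs a CBOW model trains on. The hand-tagged example below is for illustration only; a real pipeline would use a Korean morphological analyzer:

```python
def attach_pos(morphemes):
    """Attach POS tags, e.g. '예쁘' -> '예쁘/VA', as one way to
    disambiguate homonyms before deriving morpheme vectors."""
    return [f"{m}/{tag}" for m, tag in morphemes]

def cbow_pairs(tokens, window=5):
    """(context, target) pairs as consumed by a CBOW word-vector model."""
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        if context:
            pairs.append((context, target))
    return pairs

# Hand-tagged toy review: '예쁘고 좋아요' ("pretty and good"); the tags
# are illustrative guesses, not analyzer output
tagged = [("예쁘", "VA"), ("고", "EC"), ("좋", "VA"), ("아요", "EF")]
tokens = attach_pos(tagged)
pairs = cbow_pairs(tokens, window=5)
```

The CBOW model then learns a 300-dimensional vector per token by predicting each target from the average of its context vectors; swapping `tokens` for the untagged morphemes yields the study's other input variant.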

An Analytical Approach Using Topic Mining for Improving the Service Quality of Hotels (호텔 산업의 서비스 품질 향상을 위한 토픽 마이닝 기반 분석 방법)

  • Moon, Hyun Sil;Sung, David;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.21-41
    • /
    • 2019
  • Thanks to the rapid development of information technologies, the data available on the Internet have grown rapidly. In this era of big data, many studies have attempted to offer insights from, and demonstrate the effects of, data analysis. In the tourism and hospitality industry, many firms and studies have paid attention to online reviews on social media because of their large influence over customers. As tourism is an information-intensive industry, the effect of these information networks on social media platforms is more remarkable than in any other type of media. However, there are limits to the service-quality improvements that can be made on the basis of opinions posted on social media platforms. Users on social media platforms express their opinions as text, images, and so on, and the raw data sets from these reviews are unstructured. Moreover, these data sets are too big for humans to extract new information and hidden knowledge from. To use them for business intelligence and analytics applications, proper big data techniques such as Natural Language Processing and data mining are needed. This study suggests an analytical approach that yields insights directly from these reviews to improve the service quality of hotels. Our proposed approach consists of topic mining, to extract the topics contained in the reviews, and decision tree modeling, to explain the relationship between topics and ratings. Topic mining refers to a method for finding, from a collection of documents, groups of words that represent a document. Among several topic mining methods, we adopted the Latent Dirichlet Allocation (LDA) algorithm, which is considered the most universal. However, LDA alone is not enough to find insights that can improve service quality, because it cannot find the relationship between topics and ratings. To overcome this limitation, we also use the Classification and Regression Tree (CART) method, a kind of decision tree technique. 
Through the CART method, we can find which topics are related to positive or negative ratings of a hotel and visualize the results. This study thus presents an analytical approach for improving hotel service quality from unstructured review data sets. Through experiments on four hotels in Hong Kong, we identify the strengths and weaknesses of each hotel's services and suggest improvements to aid customer satisfaction. From positive reviews in particular, we find what these hotels should maintain in their service quality; for example, compared with the other hotels, one hotel has a good location and room condition, as extracted from its positive reviews. From negative reviews, in contrast, we find what they should modify in their services; for example, one hotel should improve room conditions related to soundproofing. These results show that our approach is useful for finding insights into the service quality of hotels. That is, from an enormous volume of review data, our approach can provide practical suggestions for hotel managers to improve their service quality. In the past, studies for improving service quality relied on surveys or interviews of customers. However, these methods are often costly and time-consuming, and the results may be distorted by biased sampling or untrustworthy answers. The proposed approach directly obtains honest feedback from customers' online reviews and draws insights through a type of big data analysis, so it will be a more useful tool for overcoming the limitations of surveys and interviews. Moreover, our approach can easily obtain service-quality information for other hotels or services in the tourism industry, because it needs only open online reviews and ratings as input data. Furthermore, the performance of our approach will improve if other structured and unstructured data sources are added.
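The proposed pipeline - LDA to turn each review into a topic distribution, then a decision tree to relate topics to ratings - can be sketched with scikit-learn on a toy corpus. The reviews and ratings below are invented stand-ins, not the Hong Kong hotel data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.tree import DecisionTreeClassifier

# Invented toy reviews standing in for the hotel review corpus
reviews = [
    "great location near the station and a clean quiet room",
    "clean room friendly staff and a great location",
    "noisy room thin walls could hear the street all night",
    "thin walls noisy corridor and poor soundproofing",
    "great location helpful staff quiet comfortable room",
    "noisy night poor soundproofing and street noise",
]
ratings = [1, 1, 0, 0, 1, 0]          # 1 = positive, 0 = negative

# Bag-of-words counts, then LDA: each review becomes a topic distribution
dtm = CountVectorizer().fit_transform(reviews)
doc_topics = LatentDirichletAllocation(
    n_components=2, random_state=0).fit_transform(dtm)

# CART-style tree: which topic proportions drive positive vs. negative ratings
tree = DecisionTreeClassifier(random_state=0).fit(doc_topics, ratings)
```

Inspecting the fitted tree's splits (or plotting it) then shows which topics, e.g. a "location/staff" topic versus a "noise" topic, push ratings up or down, which is the insight the study extracts for each hotel.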

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.73-95
    • /
    • 2021
  • This study uses the Node2vec graph embedding method and LightGBM link prediction to explore undeveloped export candidate countries for Korea's food and beverage industry. Node2vec is a method that improves on the representation of structural equivalence in a network, which is known to be relatively weak in existing link prediction methods based on the number of common neighbors in the network; the method is therefore known to show excellent performance in capturing both community structure and structural equivalence. The embedding is produced from walks of constant length starting at an arbitrarily designated node, so the resulting node sequences are easy to apply as input to models for downstream tasks such as Logistic Regression, Support Vector Machines, and Random Forests. Based on these features of the Node2vec graph embedding method, this study applied it to international trade information for the Korean food and beverage industry. Through this, we intend to contribute to an extensive-margin diversification effect for Korea in the industry's global value chain relationships. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance. This was superior to the binary classifier based on Logistic Regression set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the LightGBM-based optimal prediction model derived in this study showed better performance than the link prediction model of previous studies, which was set as the benchmarking model. 
The predictive model of the previous study recorded a recall of only 0.75, whereas the proposed model of this study showed better performance, with a recall of 0.79. The difference in prediction performance between the benchmarking model and this study's model is due to the model-learning strategy. In this study, trades were grouped by trade value, and prediction models were trained differently for these groups. The specific methods are (1) randomly masking some of all trades and training the model without setting any condition on trade value, (2) randomly masking some of the trades with an average trade value or higher and training the model, and (3) randomly masking some of the trades with trade value in the top 25% or higher and training the model. As a result of the experiment, it was confirmed that the model trained by randomly masking some of the trades with above-average trade value performed best and most stably. Additional investigation found that most of the potential export candidates for Korea derived through this model appeared appropriate. Taken together, this study demonstrates the practical utility of a link prediction method combining Node2vec and LightGBM. In addition, useful implications could be derived for weight-update strategies that can perform better link prediction while training the model. This study also has policy utility because it applies graph-embedding-based link prediction to trade transactions, to which it has rarely been applied. The results of this study support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe it has sufficient usefulness as a tool for policy decision-making.
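The core of Node2vec is its biased second-order random walk, which interpolates between BFS-like (community) and DFS-like (structural) exploration via the return parameter p and the in-out parameter q. A minimal sketch on a toy graph follows; the country edges are invented, not the study's trade network, and in the full method the walks would feed a skip-gram model, with node-pair features then feeding the LightGBM link classifier:

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0, rng=None):
    """One biased second-order random walk as defined by node2vec.

    p penalizes returning to the previous node; q controls how far the
    walk strays (q > 1 keeps it local, q < 1 pushes it outward)."""
    rng = rng or random.Random(0)
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = sorted(adj[cur])
        if not nbrs:
            break
        if len(walk) == 1:             # first step: unbiased
            walk.append(rng.choice(nbrs))
            continue
        prev = walk[-2]
        weights = [
            1.0 / p if nxt == prev            # return to previous node
            else 1.0 if nxt in adj[prev]      # distance 1 from prev: stay near
            else 1.0 / q                      # distance 2: explore outward
            for nxt in nbrs
        ]
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

# Toy undirected graph of hypothetical trade relationships
adj = {
    "KOR": {"USA", "CHN", "VNM"},
    "USA": {"KOR", "CHN"},
    "CHN": {"KOR", "USA", "VNM"},
    "VNM": {"KOR", "CHN"},
}
walk = node2vec_walk(adj, "KOR", length=10)
```

Link prediction then masks some edges (as in the study's trade-value-stratified masking), embeds the remaining graph from many such walks, and trains the classifier to tell true held-out edges from non-edges.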