• Title/Summary/Keyword: Implicit methods


Modeling of flat otter boards motion in three dimensional space (평판형 전개판의 3차원 운동 모델링)

  • Choe, Moo-Youl;Lee, Chun-Woo;Lee, Gun-Ho
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.43 no.1 / pp.49-61 / 2007
  • Otter boards in a trawl are essential pieces of equipment for spreading the net mouth horizontally. Their performance should be evaluated in terms of the ratio of spreading force to drag and the stability of towing in the water. Up to the present, studies of otter boards have focused mainly on drag and lift forces, not on the stability of otter board motion in three-dimensional space. In this study, the otter board is regarded as a rigid body with six degrees of freedom of motion in a three-dimensional coordinate system. The forces acting on the otter board are its underwater weight, the hydrodynamic drag and spreading forces, and the tension on the warps and otter pendants. The force equations were derived and substituted into the governing equations of six-degree-of-freedom motion, yielding second-order differential equations for the otter board. For stable numerical integration of this system, the backward Euler method, an implicit method, was used, and graphic simulations were produced from the numerical results. The simulations covered three types of otter boards of equal area but different aspect ratios (${\lambda}=0.5,\;1.0,\;1.5$). The tested gear was a mid-water trawl, the towing speed was 4 knots, the warp length was 350 m, and all other conditions were identical for each otter board. The results of this study are as follows. First, the otter board with ${\lambda}=1.0$ showed the longest spread distance, and the one with ${\lambda}=0.5$ the shortest. Second, the otter boards with ${\lambda}=1.0$ and 1.5 remained upright at the towing speed of 4 knots, but the one with ${\lambda}=0.5$ heeled outward. Third, the yawing angles of the three otter boards were similar after 100 seconds, with small oscillations. Fourth, it was revealed that net height and width are affected by characteristics of the otter boards such as the lift coefficient.
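
The stability advantage of the implicit scheme named in the abstract can be seen on a toy problem. The sketch below is not the authors' 6-DOF otter-board model; it only illustrates, for a stiff linear decay ODE, why backward Euler stays stable at step sizes where explicit Euler would blow up.

```python
# Backward (implicit) Euler for the stiff test equation x'(t) = -k * x(t).
# The implicit update x_{n+1} = x_n + h * f(x_{n+1}) solves in closed form to
# x_{n+1} = x_n / (1 + k*h), which decays for any step size h > 0.
# Illustrative only; the paper's 6-DOF equations are far more involved.

def backward_euler_decay(x0, k, h, steps):
    """Integrate x' = -k*x with the backward Euler method."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x / (1.0 + k * h)  # closed-form implicit update for this linear ODE
        trajectory.append(x)
    return trajectory

# With k*h = 5, explicit Euler would oscillate and diverge; this stays stable.
traj = backward_euler_decay(x0=1.0, k=50.0, h=0.1, steps=10)
print(traj[-1])
```

For a nonlinear system such as the otter-board equations, each implicit step would instead require solving a (generally nonlinear) equation for the new state, e.g. with Newton iteration.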

Progressive occupancy network for 3D reconstruction (3차원 형상 복원을 위한 점진적 점유 예측 네트워크)

  • Kim, Yonggyu;Kim, Duksu
    • Journal of the Korea Computer Graphics Society / v.27 no.3 / pp.65-74 / 2021
  • 3D reconstruction means reconstructing the 3D shape of an object from an image or a video. We propose a progressive occupancy network architecture that can recover not only the overall shape of the object but also its local details. Unlike the original occupancy network, which uses a single feature vector embedding information from the whole image, we extract and utilize different levels of image features depending on the receptive field size. We also propose a novel network architecture that applies these image features sequentially to the decoder blocks, progressively improving the quality of the reconstructed 3D shape. In addition, we design a novel decoder block structure that properly combines the different levels of image features and uses them to update the input point feature. We trained our progressive occupancy network on ShapeNet and compared its representational power with two prior methods: the original occupancy network (ONet) and a recent work (DISN) that, like ours, uses different levels of image features. Our network outperforms ONet on all evaluation metrics and achieves slightly better or comparable scores to DISN. In the visualization results, our method successfully reconstructs local details that ONet misses. Also, compared with DISN, which fails to reconstruct thin or occluded parts of the object, our progressive occupancy network successfully captures these parts. These results validate the usefulness of the proposed network architecture.
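
The progressive idea described above can be sketched in a few lines: each decoder block conditions the running per-point feature on a different level of image feature, coarse to fine, before a final occupancy prediction. The layer sizes, the concatenate-then-project combination, and the random placeholder features below are all our assumptions for illustration, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder_block(point_feat, image_feat, w):
    # Combine the point feature with this level's image feature,
    # then update it with a linear projection + ReLU.
    h = np.concatenate([point_feat, image_feat], axis=-1)
    return np.maximum(h @ w, 0.0)

dim = 32
n_points = 100
# Placeholder weights and features standing in for learned parameters
# and CNN feature maps sampled at the query points (coarse -> fine).
weights = [rng.normal(0.0, 0.1, (2 * dim, dim)) for _ in range(3)]
point_feat = rng.normal(size=(n_points, dim))
image_feats = [rng.normal(size=(n_points, dim)) for _ in range(3)]

# Progressive refinement: one image-feature level per decoder block.
for w, img in zip(weights, image_feats):
    point_feat = decoder_block(point_feat, img, w)

# Final per-point occupancy probability via a sigmoid readout.
occupancy = 1.0 / (1.0 + np.exp(-point_feat.sum(axis=-1)))
print(occupancy.shape)
```

The design intuition is that early blocks, fed coarse features, fix the global shape, while later blocks, fed fine features, add local detail.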

Role of Social Care Services after the Unification: 'TAIDA' Scenario Analysis (통일 이후 돌봄서비스의 사회통합 역할에 관한 연구: 미래 시나리오 분석)

  • Choi, Young Jun;Hwang, Gyu Seong;Choi, Hye Jin
    • 한국사회정책 / v.23 no.1 / pp.61-93 / 2016
  • This research aims to analyze the role of social care services after unification, assuming that unification occurs in 2020 in a peaceful manner. While much has been discussed about unification in recent years, inside and outside academia, most of the discussion tends to focus on political and economic dimensions. Social policy studies on North Korean defectors have also increased, but few pay attention to social policy strategies after a possible unification. In this context, this study explores the various explicit and implicit roles of social care services and possible strategies after unification. As its research methodology, it employs one of the scenario methods, 'TAIDA', for projecting and simulating an uncertain future. In doing so, it first reviews South and North Korean socio-economic experiences over the last two decades as feedback, and German unification experiences as feedforward. In addition, it utilizes an expert survey. Based on these reviews together with the survey results, it discusses the various influences of social care services after unification and draws policy implications. This research argues that social care services could have profound effects on the stability of socio-economic conditions after unification.

Dispute of Part-Whole Representation in Conceptual Modeling (부분-전체 관계에 관한 개념적 모델링의 논의에 관하여)

  • Kim, Taekyung;Park, Jinsoo;Rho, Sangkyu
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.97-116 / 2012
  • Conceptual modeling is an important step toward successful system development. It helps system designers and business practitioners share the same view of domain knowledge. If the work is successful, the result of conceptual modeling can increase productivity and reduce failures. However, the value of conceptual modeling is unlikely to be evaluated uniformly, because we lack agreement on how to elicit concepts and how to represent them with conceptual modeling constructs. In particular, designing relationships between components, also known as part-whole relationships, has been regarded as complicated work. The recent study "Representing Part-Whole Relations in Conceptual Modeling: An Empirical Evaluation" (Shanks et al., 2008), published in MIS Quarterly, can be regarded as one such positive effort. Not only is the study one of the few attempts to clarify how to select modeling alternatives in part-whole design, it also presents results based on an empirical experiment. Shanks et al. argue that there are two modeling alternatives for representing part-whole relationships: an implicit representation and an explicit one. Based on their experiment, they conclude that the explicit representation increases the value of a conceptual model. Moreover, Shanks et al. justify their findings by citing the BWW ontology. Recently, the study by Shanks et al. has faced criticism. Allen and March (2012) argue that the experiment of Shanks et al. lacks validity and reliability, since the experimental setting suffers from an error-prone and self-serving design. They point out that the experiment was deliberately framed to support the idea that using concrete UML concepts yields better model understanding. Additionally, Allen and March add that the experiment failed to consider boundary conditions, thus reducing its credibility. Shanks and Weber (2012) flatly contradict the argument of Allen and March (2012). In their defense, they posit that the BWW ontology is correctly applied in supporting the research, and they insist that the experiment is fairly acceptable. Therefore, Shanks and Weber argue that Allen and March distort the true value of Shanks et al. by pointing out minor limitations. In this study, we investigate the dispute around Shanks et al. in order to answer the following question: "What is the proper value of the study conducted by Shanks et al.?" More profoundly, we question whether using the BWW ontology is the only viable option for exploring better conceptual modeling methods and procedures. To understand the key issues of the dispute, we first reviewed previous studies related to the BWW ontology. We then critically reviewed both Shanks and Weber and Allen and March. With these findings, we further discuss theories of part-whole (or part-of) relationships that are rarely treated in the dispute. As a result, we found three additional issues that are not sufficiently covered by the dispute, whose main focus is on errors of experimental method: Shanks et al. did not use Bunge's ontology properly; their refutation of a paradigm shift lacks a concrete, logical rationale; and the conceptualization of part-whole relations should be reformed. In conclusion, Allen and March properly identify issues that weaken the value of Shanks et al. In general, their criticism is reasonable; however, they do not provide sufficient answers on how to anchor future studies of part-whole relationships. We argue that the use of the BWW ontology should be rigorously evaluated against the original philosophical rationales surrounding part-whole existence. Moreover, conceptual modeling of part-whole phenomena should be investigated through a more plentiful lens of alternative theories. The criticism of Shanks et al. should not be regarded as a rejection of evaluating modeling methods for alternative part-whole representations. On the contrary, it should be viewed as a call for research on usable and useful approaches to increase the value of conceptual modeling.

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine every document in full to determine whether it might be useful to them. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents have not benefited from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, extremely tedious and time-consuming, and requiring a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted using supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to the experimental results of Turney, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we decided to adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers, and it has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
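
The core matching step, cosine similarity between a term-frequency document vector and each candidate keyword set, can be sketched as below. The tiny vocabulary, keyword sets, and weights are invented for illustration and do not come from the paper; real IVSM also involves the preprocessing and weighting steps described above.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical controlled vocabulary: each keyword maps to a weighted term set.
keyword_sets = {
    "logistics": Counter({"port": 3, "shipping": 2, "cargo": 2}),
    "fashion":   Counter({"style": 3, "trend": 2, "brand": 1}),
}

# Target document represented by its term frequencies (step 3 of the process).
doc = "the port expanded its cargo terminal as shipping volume grew"
doc_vec = Counter(doc.split())

# Assign the keyword whose set is closest to the document (steps 4-5).
best = max(keyword_sets, key=lambda k: cosine(keyword_sets[k], doc_vec))
print(best)  # the logistics keyword set matches this document
```

Because the match is against the keyword-set vectors rather than the document's own terms, a keyword can be assigned even if it never appears verbatim in the document, which is the point of the assignment approach.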

Comparative Study about Academic Thoughts of Xu Lingtai and Yoshimasu Todo (I) - Focus on their Major Books - (서영태(徐靈胎)와 길익동동(吉益東洞)의 학술사상 비교 연구 (I) - 각자의 주요 저서를 중심으로 -)

  • Yoon, Cheol-Ho;Huang, Huang
    • The Journal of Internal Korean Medicine / v.31 no.4 / pp.792-812 / 2010
  • In the 18th century, Xu Lingtai (徐靈胎) and Yoshimasu Todo (吉益東洞) were famous doctors advocating ancient medicine, though they lived in different countries, China and Japan. We compared their major books, analyzed their academic thoughts, and drew the conclusions below. 1. First, consider "Classified Prescriptions of Treatise on Cold Damage Diseases, 傷寒論類方" and "Classified Assemblage of Prescriptions, 類聚方". Based on the essential thought that a prescription and a syndrome should correspond, these books arranged and classified Zhang Zhongjing's (張仲景) texts. "Classified Prescriptions of Treatise on Cold Damage Diseases", based on the thought that principles, methods, formulas, and medicinals (理法方藥) are integrated in prescriptions, tried to find the implicit treatment rules in prescriptions and syndromes by analyzing the "Treatise on Cold Damage Diseases, 傷寒論". On the other hand, because "Classified Assemblage of Prescriptions" focused on the syndromes of ancient prescriptions (古方), it classified and collected the related texts of the "Treatise on Cold Damage Diseases" and the "Synopsis of Prescriptions of the Golden Chamber, 金匱要略", and then suggested only simple instructions on how to prescribe medicine; in this book, the empirical trend is clear. 2. Second, there are "100 Kinds Records from Shennong's Classic of Materia Medica, 神農本草經百種錄" and the "description of herbal pharmacology comprised of excerpts from the Shanghanlun and medical experiences, 藥徵". Though both are professional Oriental pharmacology publications advocating a return to the ancients, there are remarkable differences in writing style between them. The latter, based on the "Treatise on Cold Damage Diseases" and the "Synopsis of Prescriptions of the Golden Chamber", simply explained the effects of medications and discussed the 'matter of course (所當然)' but not 'the reason why (所以然)'. In explaining syndromes, it confirmed through research and emphasized the inductive method. On the other hand, "100 Kinds Records from Shennong's Classic of Materia Medica", based on "Shennong's Classic of Materia Medica, 神農本草經", explained the nature of medications and discussed 'the reason why (所以然)'. In explaining syndromes, it annotated and explained, and emphasized the process of reasoning. 3. Third, there are "Discuss the Headwaters of Medicine, 醫學源流論" and "Severance of Medical Evils, 醫斷". Aimed at the then-confused state of medical theories, these books brought order out of chaos, clarified the categories of medical research, and emphasized a scientific method that could put theories into practice and verify them. The difference is that "Severance of Medical Evils" researched only macroscopically observable clinical phenomena and even denied the existence of names of diseases and etiological causes. Thus, it emphasized the accumulation of experience and laid stress on 'watching and realizing (目認)' and 'understanding and taking in (解悟)'. "Discuss the Headwaters of Medicine" strongly emphasized research on 'something not yet occurring (未然)'; that is to say, it induced notions of a disease from observing clinical phenomena and, based on these, predicted the 'not yet occurring (未然)' and emphasized researching 'the reason why (所以然)'. As regards how they dealt with traditional theories and post-Zhang Zhongjing medicine, "Severance of Medical Evils" took a completely dismissive attitude, while "Discuss the Headwaters of Medicine" held that they could be used reasonably given specific situations and detailed analysis. Collectively speaking, there were differences between the medical theories of Xu Lingtai and Yoshimasu Todo: whether each tried to research the essence of disease, whether each tried to consider it rationally, and how each treated the various opinions arising from the theories of traditional medicine and clinical experience.

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • After the emergence of the Internet, social media built on highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques for extracting, classifying, understanding, and assessing the opinions implicit in text, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempt to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining of social media content, from the initial data gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target requires a different means of access, such as open APIs, search tools, DB-to-DB interfaces, or purchased content. The second phase is pre-processing, which generates useful material for meaningful analysis. If garbage data are not removed, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user ID, content ID, hit counts, review or reply status, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis. There are also various other applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major purpose of this phase is to explain the results and help users comprehend their meaning; therefore, to the extent possible, deliverables from this phase should be simple, clear, and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds 66.5% market share and has kept the No. 1 position in the Korean "ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, and reputation. In this phase, we used freeware such as the tm, KoNLP, ggplot2, and plyr packages of the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples built with the open library packages of the R project. Business actors can detect at a glance areas that are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume in a category-by-time matrix, with color density indicating activity in each time period. A valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation, since a tree map can present buzz volume and sentiment in a hierarchical structure over a given period. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
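
The sentiment-polarity step of the analyzing phase can be sketched with a minimal lexicon-based scorer. The study itself worked in R with Korean-language resources (KoNLP, etc.); this Python sketch with an invented English lexicon only illustrates the idea of scoring each piece of content as positive-minus-negative lexicon hits.

```python
# Tiny hypothetical sentiment lexicon; a real one would be domain-specific
# (as the paper's instant-noodle lexicon was) and far larger.
POSITIVE = {"delicious", "great", "love", "tasty"}
NEGATIVE = {"bland", "salty", "disappointing", "bad"}

def sentiment(text: str) -> int:
    """Polarity score: count of positive tokens minus count of negative tokens."""
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

reviews = [
    "the noodles were delicious and tasty",
    "too salty and a bit disappointing",
]
scores = [sentiment(r) for r in reviews]
print(scores)  # one polarity score per review
```

Aggregating such per-content scores by category and time period is what feeds the heat maps and valence tree maps described above.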