
Study on 3D Printer Suitable for Character Merchandise Production Training (캐릭터 상품 제작 교육에 적합한 3D프린터 연구)

  • Kwon, Dong-Hyun
    • Cartoon and Animation Studies
    • /
    • s.41
    • /
    • pp.455-486
    • /
    • 2015
  • 3D printing technology, first patented in 1986, long attracted little attention outside a handful of companies due to a lack of awareness at the time. Today, however, as the original patents expire after twenty years, the price of 3D printers has decreased to a level individuals can afford, and the technology is attracting attention from industry as well as the general public, helped by the generalization of online information exchange, improved computer performance, and the resulting ease of accepting and sharing 3D data. Because 3D printers produce objects directly from digital data, which can be transmitted, revised, and supplemented without molds, they may bring a groundbreaking change to manufacturing processes, and the same effect may be attained in the character merchandise sector. Using a 3D printer is becoming a necessity in figure merchandise production, which stands at the forefront of the recently prominent kidult culture. Predicting the demand from industrial sites related to such character merchandise, and considering the lower prices brought by patent expiration and technology sharing, introducing courses that cultivate manpower able to utilize 3D printers in the education field seems essential for expanding employment opportunities and sectors and for cultivating manpower capable of further creative work. However, the information that can be obtained when seeking to introduce 3D printers into school education is limited.
The press and information media mention only general information, such as the growth of the industry or the promising future value of 3D printers, and academic research likewise remains at an introductory level: analyzing data on industrial size, surveying the applicable scope in industry, or introducing the printing technologies. This lack of information gives rise to problems at the education site. Introducing the technology without examining concrete information, such as comparisons of strengths and weaknesses, forces schools to incur time and opportunity costs while the technology becomes usable only through trial and error; in particular, if an expensive piece of equipment turns out not to suit the features of school education, the loss is significant. This research targeted general users without a technical background rather than specialists. Instead of merely introducing 3D printer technologies as has been done previously, it compared their strengths and weaknesses and analyzed the problems and precautions in use according to the representative technologies, in order to explain what features a 3D printer should have when required for education on figure merchandise development, offered as elective cultural-content coursework in cartoon-related departments, and to provide information of practical help for future education using 3D printers. In the main body, the technologies are classified from a new perspective, such as support (buttress) method, types of materials, and two-dimensional versus three-dimensional printing method; this classification was chosen to make the practical problems in use easy to compare.
In conclusion, the FDM-type printer was selected as the most suitable: although somewhat lacking in output quality, it is comparatively cheap and has low repair, maintenance, and material costs. It was additionally recommended to select a vendor that provides good technical support.

A Study on the Determinants of Patent Citation Relationships among Companies : MR-QAP Analysis (기업 간 특허인용 관계 결정요인에 관한 연구 : MR-QAP분석)

  • Park, Jun Hyung;Kwahk, Kee-Young;Han, Heejun;Kim, Yunjeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.21-37
    • /
    • 2013
  • Recently, with the advent of the knowledge-based society, more people are becoming interested in intellectual property. In particular, the ICT companies leading the high-tech industry are striving for systematic management of intellectual property. Patent information represents the intellectual capital of a company, and quantitative analysis of the continuously accumulated patent information has now become possible. Patent information enables analysis at various levels, ranging from the individual patent to the enterprise, industry, and country level. Through patent information we can identify the status of a technology and analyze its impact on performance, and through network analysis we can trace the flow of knowledge, thereby not only identifying changes in technology but also predicting the direction of future research. Two important analyses utilize patent citation information in network research: citation-indicator analysis based on citation frequency, and network analysis based on citation relationships. This study further analyzes whether company size affects patent citation relationships. Seventy-four S&P 500 registered companies that provide IT and communication services were selected. To determine the patent citation relationships between the companies, patent citations in 2009 and 2010 were collected, and sociomatrices representing the citation relationships between the companies were created. In addition, each company's total assets were collected as an index of company size. The distance between two companies is defined as the absolute value of the difference between their total assets, and the simple (signed) difference is taken to describe the hierarchy between them.
QAP correlation analysis and MR-QAP analysis were carried out using the distance and hierarchy matrices and the sociomatrices of the 2009 and 2010 patent citations. The QAP correlation results show that the 2009 and 2010 citation networks have the highest correlation with each other. A positive correlation is also shown between citation relationships and the distance between companies: citation relationships increase when there is a size difference between companies. A negative correlation was found between citation relationships and hierarchy, indicating that the patents of higher-tier companies are highly evaluated and exert influence toward lower-tier companies. MR-QAP analysis was then carried out with the sociomatrix of the 2010 citation relationships as the dependent variable and the 2009 citation network, the distance network, and the hierarchy network as independent variables, to find the main factors influencing the 2010 citation relationships between the companies. The results show that all independent variables positively influence the 2010 citation relationships; in particular, the 2009 citation relationships have the most significant impact, which means there is continuity in the patent citation relationships. Through the QAP correlation and MR-QAP analyses, the patent citation relationships between companies are shown to be affected by company size.
But the most significant factor is the citation relationships formed in the past. Maintaining patent citation relationships between companies may be strategically important, both as a way to look into relationships for sharing intellectual property with one another and as an important aid in identifying partner companies for cooperation.
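The QAP procedure described above tests a correlation between two network matrices against a null distribution built by jointly permuting node labels. A minimal sketch, not the authors' implementation, is below; the matrix sizes, density, and permutation count are illustrative assumptions.

```python
import numpy as np

def qap_correlation(A, B, n_perm=1000, seed=0):
    """QAP correlation between two square sociomatrices.

    Correlates the off-diagonal cells of A and B, then builds a null
    distribution by jointly permuting the rows and columns of B.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    mask = ~np.eye(n, dtype=bool)          # ignore self-citations on the diagonal

    def offdiag_corr(X, Y):
        return np.corrcoef(X[mask], Y[mask])[0, 1]

    observed = offdiag_corr(A, B)
    null = np.empty(n_perm)
    for i in range(n_perm):
        p = rng.permutation(n)             # one relabeling of the companies
        null[i] = offdiag_corr(A, B[np.ix_(p, p)])
    p_value = np.mean(np.abs(null) >= abs(observed))
    return observed, p_value

# Toy check: a citation network correlates perfectly with itself, and the
# permutation null shows this does not arise by chance.
rng = np.random.default_rng(1)
A = (rng.random((12, 12)) < 0.3).astype(float)
np.fill_diagonal(A, 0)
r, p = qap_correlation(A, A, n_perm=500)
print(r, p)
```

MR-QAP extends the same permutation logic to a multiple regression with several predictor matrices (here, the 2009 network, distance, and hierarchy), but the correlation version above conveys the core idea.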

Studies on the Estimation of K2O Requirement for rice through the Chemical Test Data of Paddy Top Soil (화학분석(化學分析)을 통(通)한 수도(水稻)의 가리적량(加里適量) 추정(推定)에 관한 연구(硏究))

  • Kim, Moon Kyu
    • Korean Journal of Agricultural Science
    • /
    • v.2 no.1
    • /
    • pp.61-100
    • /
    • 1975
  • This study was made to find out the possibility of successfully using the following $K_2O$ recommendation equation:

$K_2O\;kg/10a=(K_O/\sqrt{Ca+Mg}-K_S/\sqrt{Ca+Mg})\cdot\sqrt{Ca+Mg}\cdot 47\cdot B.D.$

where $K_O/\sqrt{Ca+Mg}=0.03518+0.0007658\;SiO_2/O.M.$, $K_S/\sqrt{Ca+Mg}$ = exchangeable K me/100g / $\sqrt{total\;soluble\;(Ca+Mg)\;me/100g\;in\;soil}$, and B.D. = bulk density of top soil, when the dose of nitrogen for rice is estimated from the following equation:

$N\;kg/10a=(4.2+0.096\;SiO_2/O.M.)\cdot F$

where $F=0.907+0.263x-0.013x^2$ and $SiO_2/O.M.=(available\;SiO_2\;ppm)/(organic\;matter\;\%)$ in soil.

For this, two field experiments, one in sandy and the other in clay paddy soil, were conducted using 3 levels of wollastonite (0, 500, 100 kg/10a) as main treatments; 3 levels of $K_2O$ application were used as sub-plots, as follows: (1) 8 kg of $K_2O$/10a regardless of the K activity $K/\sqrt{Ca+Mg}$; (2) the amount of $K_2O$ kg/10a estimated from the above equation; and (3) the same as (2) plus an additional 30% of $K_2O$. The dose of N kg/10a was determined from the above equation based on the value of the $SiO_2$/O.M. ratio in each treatment. There were three replications. Akibare (introduced from Japan), the leading rice variety in the Chung Chong Nam Do area, was used. The data obtained through soil and plant analysis and through growth and yield observations were thoroughly examined to reach the following summarized conclusions. 1. The nitrogen dose estimated from the above equation was in excess of that for optimum growth of the rice variety Akibare, indicating the need to modify the value of "F" or the constants in the equation; the concept of using $SiO_2$/O.M. in the equation was shown to be applicable. 2. The dose of potash estimated from the respective equation given above
was also in excess of the rice requirements, indicating the need for a minor change in the estimation of the $K_O/\sqrt{Ca+Mg}$ value and a considerable modification in the calculation of the $K_S/\sqrt{Ca+Mg}$ value; however, the concept of using $K/\sqrt{Ca+Mg}$ as a basis for $K_2O$ recommendation was shown to be quite reasonable. 3. It was found, from a correlation study using the data of paddy yield and the amount of $K_2O$ absorbed by the rice plants, that substituting the value $K_S/\sqrt{Ca+Mg}=0.037+0.78\;K\;me/100g\;soil$ in the equation was much more applicable than using the value calculated from the data of soil and wollastonite analysis.
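For concreteness, the recommendation equations above can be evaluated numerically. All soil-test values below are hypothetical placeholders, not data from the study, and the value of $x$ in $F$ is an assumption.

```python
# Worked example of the K2O and N recommendation equations, using assumed
# (hypothetical) soil-test values -- not data from the paper.
import math

# Assumed topsoil test results (hypothetical):
sio2_ppm = 130.0        # available SiO2, ppm
om_pct = 2.5            # organic matter, %
exch_k = 0.12           # exchangeable K, me/100g
ca_mg = 6.0             # total soluble (Ca+Mg), me/100g
bulk_density = 1.2      # bulk density of top soil

ratio = sio2_ppm / om_pct                    # SiO2/O.M.
ko_act = 0.03518 + 0.0007658 * ratio         # optimum K activity Ko/sqrt(Ca+Mg)
ks_act = exch_k / math.sqrt(ca_mg)           # soil K activity Ks/sqrt(Ca+Mg)

# K2O kg/10a = (Ko/sqrt(Ca+Mg) - Ks/sqrt(Ca+Mg)) * sqrt(Ca+Mg) * 47 * B.D.
k2o_dose = (ko_act - ks_act) * math.sqrt(ca_mg) * 47 * bulk_density

# N kg/10a = (4.2 + 0.096 * SiO2/O.M.) * F, with F = 0.907 + 0.263x - 0.013x^2
x = 1.0                                      # assumed placeholder value of x
f = 0.907 + 0.263 * x - 0.013 * x**2
n_dose = (4.2 + 0.096 * ratio) * f

print(f"K2O dose: {k2o_dose:.2f} kg/10a")
print(f"N dose:   {n_dose:.2f} kg/10a")
```

A soil already rich in exchangeable K (so that $K_S$ exceeds $K_O$) would yield a negative value, i.e. no potash recommendation.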


The Study on the Debris Slope Landform in the Southern Taebaek Mountains (태백산맥 남부산지의 암설사면지형)

  • Jeon, Young-Gweon
    • Journal of the Korean Geographical Society
    • /
    • v.28 no.2
    • /
    • pp.77-98
    • /
    • 1993
  • The intent of this study is to analyze the characteristics of the distribution, pattern, and deposits of the exposed debris slope landforms in the southern Taebaek mountains through aerial photograph interpretation, measurement on topographical maps, and field surveys. It also aims to investigate the arrangement types of mountain slopes and the landform development of debris slopes in this area. The main observations can be summed up as follows.

1. Distribution characteristics. 1) From the viewpoint of bedrock, the distribution density of talus is high where the bedrock has a high density of joints, sheeting structures, and hard rocks, while that of block streams is high in the case of intrusive rocks with a talus line. 2) From the viewpoint of distribution altitude, talus is mainly distributed at 301~500 meters above sea level, while block streams are distributed at 101~300 meters. 3) From the viewpoint of slope orientation, the distribution density of talus on south-facing slopes (S, SE, SW) is a little higher than that on north-facing slopes (N, NE, NW).

2. Pattern characteristics. 1) Of the four types, the tongue-shaped type is the most numerous. 2) The average length of talus slopes is 99 meters; talus composed of hornfels or granodiorite is longer, for the former easily forms a free face and the latter easily produces rounded stones. The average length of block stream slopes is 145 meters, the longest being one km (granodiorite). 3) The gradient of talus slopes is 20~45°, most of them 26~30°; talus composed of intrusive rocks is gentler. 4) Talus slopes show a concave profile, which indicates readjustment of the constituent debris. Some block stream slopes are concave at the upper and lower slope but convex at the middle slope; others have uneven profiles.

3. Deposit characteristics. 1) The average diameter of the constituent debris is 48~172 centimeters, and the sorting of the debris is fair, without matrix. The debris of block streams is larger than that of talus; this difference in average debris diameter is fundamentally caused by the joint spacing of the bedrock. 2) The shape of the constituent debris in talus is mainly angular, but debris composed of intrusive rocks is sub-angular; the constituent debris of block streams is mainly sub-rounded. 3) In talus, debris diameter generally increases downslope, though some deposits are disordered, and the debris at the sides is larger than that in the middle of the landform surface. In block streams, the debris diameter variation is vertically disordered, and the debris in the middle is generally larger than that at the sides. 4) The long-axis orientation of the debris is fair at the lower part of the slope in talus (only 2 of 6 taluses). Of the block streams (2 of 3), one shows good sorting and the other fair sorting; the researcher considers the latter to have been caused by collapse of the constituent debris. 5) Most debris is weathered, and some has been secondarily weathered in situ, but talus composed of fresh debris is also developing.

4. Landform development of debris slopes and arrangement types of mountain slopes. 1) The formation and development of talus is divided into two periods: the formation period of talus (the last glacial period) and the adjustment period (postglacial age). That of block streams is divided into three periods: the production period of blocks (Tertiary, interglacial period), the formation period of block streams (the last glacial period), and the adjustment period of block streams (postglacial age). 2) The arrangement types of mountain slopes in the research area are divided into six types, as follows. Type I: high-level convex slope - free face - talus - block stream - alluvial surface. Type II: high-level convex slope - free face - talus - alluvial surface. Type III: free face - talus - block stream - alluvial surface. Type IV: free face - talus - alluvial surface. Type V: talus - alluvial surface. Type VI: block stream - alluvial surface. In particular, Type IV is the basic type of all; the others are modified ones.


A Study on the Meaning and Future of the Moon Treaty (달조약의 의미와 전망에 관한 연구)

  • Kim, Han-Taek
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.21 no.1
    • /
    • pp.215-236
    • /
    • 2006
  • This article focused on the meaning of the 1979 Moon Treaty and its future. Although the Moon Treaty is one of the five major space-related treaties, it has been accepted by only 11 member states, all non-space powers, and thus has had the least influence in the field of space law. This article analysed the relationship between the 1979 Moon Treaty and the 1967 Space Treaty, which was the first treaty of principles, and examined the meaning of the "Common Heritage of Mankind" (hereinafter CHM) stipulated in the Moon Treaty in terms of international law. It also dealt with the present and future problems arising from the Moon Treaty. As far as the 1967 Space Treaty is concerned, the main standpoint is that outer space, including the moon and the other celestial bodies, is res extra commercium, an area not subject to national appropriation, like the high seas; it proclaims the principle of non-appropriation concerning the celestial bodies in outer space. The concept of CHM stipulated in the Moon Treaty, however, created an entirely new category of territory in international law. This concept basically conveys the idea that the management, exploitation, and distribution of the natural resources of the area in question are matters to be decided by the international community and are not to be left to the initiative and discretion of individual states or their nationals. A similar provision is found in the 1982 Law of the Sea Convention, which operates the International Sea-bed Authority created under the concept of CHM. According to the Moon Treaty, an international regime will be established once the exploitation of the natural resources of celestial bodies other than the Earth is about to become feasible. Before the establishment of such a regime, one might imagine a moratorium upon the exploitation of the natural resources of the celestial bodies.
But the drafting history of the Moon Treaty indicates that no moratorium on the exploitation of natural resources was intended prior to the setting up of the international regime, so each State Party may exploit the natural resources, bearing in mind that those resources are the CHM. In this respect it would be better for Korea, which is not currently a party to the Moon Treaty, to become a member state in the near future, since according to the Moon Treaty the efforts of those countries which have contributed directly or indirectly to the exploitation of the moon shall be given special consideration. The Moon Treaty, although criticised by some space law experts, represents a solid basis upon which further space exploration can continue; it expresses the common collective wisdom of all member states of the United Nations and responds to the needs and possibilities of those that have already brought their technologies into outer space.


Analysis of Twitter for 2012 South Korea Presidential Election by Text Mining Techniques (텍스트 마이닝을 이용한 2012년 한국대선 관련 트위터 분석)

  • Bae, Jung-Hwan;Son, Ji-Eun;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.141-156
    • /
    • 2013
  • Social media is a representative form of Web 2.0 that shapes changes in users' information behavior by allowing users to produce their own content without any expert skills. In particular, as a new communication medium, it has a profound impact on social change by enabling users to communicate their opinions and thoughts to the masses as well as to acquaintances. Social media data plays a significant role in the emerging Big Data arena, and a variety of research areas, such as social network analysis and opinion mining, have therefore paid attention to discovering meaningful information from the vast amounts of data buried in social media. Social media has recently become a main focus of Information Retrieval and Text Mining, because it not only produces massive unstructured textual data in real time but also serves as an influential channel for opinion leading. Most previous studies, however, have adopted broad-brush and limited approaches, which have made it difficult to find and analyze new information. To overcome these limitations, we developed a real-time Twitter trend mining system that captures trends by processing big stream datasets of Twitter in real time. The system offers term co-occurrence retrieval, visualization of Twitter users by query, similarity calculation between two users, topic modeling to keep track of changes in topical trends, and mention-based user network analysis. In addition, we conducted a case study on the 2012 Korean presidential election. We collected 1,737,969 tweets containing the candidates' names and the word 'election' on Twitter in Korea (http://www.twitter.com/) for one month in 2012 (October 1 to October 31). The case study shows that the system provides useful information and detects societal trends effectively. The system also retrieves the list of terms co-occurring with given query terms.
We compare the results of term co-occurrence retrieval using the influential candidates' names 'Geun Hae Park', 'Jae In Moon', and 'Chul Su Ahn' as query terms. General terms related to the presidential election, such as 'Presidential Election', 'Proclamation in Support', and 'Public opinion poll', appear frequently. The results also show specific terms that differentiate each candidate: 'Park Jung Hee' and 'Yuk Young Su' for the query 'Geun Hae Park'; 'a single candidacy agreement' and 'Time of voting extension' for the query 'Jae In Moon'; and 'a single candidacy agreement' and 'down contract' for the query 'Chul Su Ahn'. Our system not only extracts 10 topics along with related terms but also shows the topics' dynamic changes over time by employing the multinomial Latent Dirichlet Allocation technique. Each topic can show one of two types of patterns, a rising tendency or a falling tendency, depending on the change of the probability distribution. To determine the relationship between topic trends on Twitter and social issues in the real world, we compare topic trends with related news articles, and we identify that Twitter can track issues faster than other media such as newspapers. The user network on Twitter differs from those of other social media because of the distinctive way relationships are made on Twitter: users form relationships by exchanging mentions. We visualize and analyze the mention-based networks of 136,754 users, using the three candidates' names, 'Geun Hae Park', 'Jae In Moon', and 'Chul Su Ahn', as query terms. The results show that Twitter users mention all candidates' names regardless of their political tendencies. This case study discloses that Twitter could be an effective tool to detect and predict dynamic changes in social issues, and that mention-based user networks can show aspects of user behavior uniquely found on Twitter.
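The term co-occurrence retrieval described above can be sketched in a few lines: given a query term, count and rank the terms appearing in the same tweets. The toy tweets, stopword list, and query below are hypothetical stand-ins for the paper's 1.7-million-tweet Korean corpus.

```python
# Minimal term co-occurrence retrieval: rank terms that appear in the
# same tweets as the query term. Tweets and stopwords are hypothetical.
from collections import Counter

STOPWORDS = {"the", "a", "on", "before"}

tweets = [
    "election poll shows the candidate leading",
    "single candidacy agreement discussed before the election",
    "twitter topic trends track the election poll",
    "candidate speech on the economy draws a crowd",
]

def cooccurring_terms(query, tweets, top_n=3):
    """Count terms that co-occur with `query` within the same tweet."""
    counts = Counter()
    for tweet in tweets:
        tokens = tweet.lower().split()
        if query in tokens:
            counts.update(t for t in tokens
                          if t != query and t not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

print(cooccurring_terms("election", tweets))
```

The real system applies the same idea at scale and combines it with LDA topic modeling, whose per-window topic probabilities give the rising or falling tendencies mentioned above.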

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities, spending more than 10 trillion won annually on defense improvement. Because the Defense Acquisition Program is directly related to national security as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry it out smoothly; many reportedly discover relevant regulations they were unaware of only after pushing ahead with their work. In addition, statutory statements related to the Defense Acquisition Program tend to cause serious issues even if only a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system that corrects such issues in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program related documents and sentences from the related statutory provisions, in order to determine and classify the risk of illegality and make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentences" (taken from actual statutes) and "Edited Sentences" (modified sentences derived from the originals).
Among the many Defense Acquisition Program related statutes, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentences" comprise the 83 clauses of these acts most accessible to working-level officials in their work. For each such clause, the "Edited Sentences" comprise 30 to 50 similar sentences likely to appear, in modified form, in military reports. The edited sentences were produced by modifying the original sentences according to 12 rules, in proportion to the number of original sentences matching each rule. After conducting 1:1 sentence similarity performance evaluation experiments, each "Edited Sentence" could be classified as legal or illegal with considerable accuracy. However, although the "Edited Sentence" dataset used to train the neural network models reflects a variety of actual statutory statements, it is characterized by the 12 rules, and models fed only the "Original Sentence" and "Edited Sentence" dataset could not effectively classify other sentences that appear in actual military reports; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, the models' performance was reassessed using an additional 120 newly written sentences that better resemble those in actual military reports while still being associated with the original sentences. We were thereby able to confirm that the models' performance surpassed a certain level even when trained merely on the "Original Sentence" and "Edited Sentence" data.
If sufficient model learning is achieved by improving and expanding the full training set with sentences that actually appear in reports, the models will be able to better classify sentences from military reports as legal or illegal. Based on the experimental results, this study confirms the feasibility and value of building a "Real-Time Automated Comparison System between Military Documents and Related Laws". The approach developed in this experiment can identify which specific clause, among the several that appear in the related laws, is most similar to a sentence appearing in Defense Acquisition Program related military reports, which helps determine whether the contents of the report sentences risk illegality when compared with the law clauses.
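The defining property of the Siamese architecture used above is that one shared encoder maps both sentences into the same vector space, and a similarity on the two encodings drives the classification. The sketch below illustrates that shared-encoder structure only; it substitutes a trivial character-trigram encoder for the paper's Bi-LSTM, and the sentences and threshold idea are hypothetical.

```python
# Shared-encoder (Siamese) sentence similarity, with a bag-of-character-
# trigrams encoder standing in for the paper's Bi-LSTM. All sentences
# here are hypothetical examples, not statutory text.
import math
from collections import Counter

def encode(sentence, n=3):
    """Shared encoder: a bag of character n-grams (stand-in for Bi-LSTM)."""
    s = f" {sentence.lower()} "
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def similarity(a, b):
    # Both inputs pass through the SAME encoder -- the defining property
    # of a Siamese architecture.
    return cosine(encode(a), encode(b))

original = "The project manager shall report the results to the minister."
edited = "The project manager shall report the results to the director."
unrelated = "Annual budget requests are submitted each fiscal quarter."

print(similarity(original, edited))      # high: plausibly a rule-based edit
print(similarity(original, unrelated))   # low: not derived from this clause
```

In the paper's setting, the trained network replaces this encoder, and a threshold on the learned similarity separates legal from potentially illegal rewordings.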

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification system of major job sites differs from site to site and is different from the job classification system of the 'SQF(Sectoral Qualifications Framework)' proposed by the SW field. Therefore, a new job classification system is needed for SW companies, SW job seekers, and job sites to understand. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing SQF based on job offer information of major job sites and the NCS(National Competency Standards). For this purpose, the association analysis between occupations of major job sites is conducted and the association rule between SQF and occupation is conducted to derive the association rule between occupations. Using this association rule, we proposed an intelligent job classification system based on data mapping the job classification system of major job sites and SQF and job classification system. First, major job sites are selected to obtain information on the job classification system of the SW market. Then We identify ways to collect job information from each site and collect data through open API. Focusing on the relationship between the data, filtering only the job information posted on each job site at the same time, other job information is deleted. Next, we will map the job classification system between job sites using the association rules derived from the association analysis. We will complete the mapping between these market segments, discuss with the experts, further map the SQF, and finally propose a new job classification system. As a result, more than 30,000 job listings were collected in XML format using open API in 'WORKNET,' 'JOBKOREA,' and 'saramin', which are the main job sites in Korea. After filtering out about 900 job postings simultaneously posted on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, which is a frequent pattern mining. 
Based on 800 related rules, the job classification system of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and classified into 1st and 4th stages. In the new job taxonomy, the first primary class, IT consulting, computer system, network, and security related job system, consisted of three secondary classifications, five tertiary classifications, and five fourth classifications. The second primary classification, the database and the job system related to system operation, consisted of three secondary classifications, three tertiary classifications, and four fourth classifications. The third primary category, Web Planning, Web Programming, Web Design, and Game, was composed of four secondary classifications, nine tertiary classifications, and two fourth classifications. The last primary classification, job systems related to ICT management, computer and communication engineering technology, consisted of three secondary classifications and six tertiary classifications. In particular, the new job classification system has a relatively flexible stage of classification, unlike other existing classification systems. WORKNET divides jobs into third categories, JOBKOREA divides jobs into second categories, and the subdivided jobs into keywords. saramin divided the job into the second classification, and the subdivided the job into keyword form. The newly proposed standard job classification system accepts some keyword-based jobs, and treats some product names as jobs. In the classification system, not only are jobs suspended in the second classification, but there are also jobs that are subdivided into the fourth classification. This reflected the idea that not all jobs could be broken down into the same steps. We also proposed a combination of rules and experts' opinions from market data collected and conducted associative analysis. 
Therefore, the newly proposed job classification system can be regarded as a data-based, intelligent classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system grounded in market demand by mapping occupations through data-driven association analysis rather than the intuition of a few experts. However, the study is limited in that the data were collected at a single point in time and therefore cannot fully reflect market demand as it changes. Because market demand shifts over time with seasonal factors and the timing of major corporate recruitment drives, continuous data monitoring and repeated experiments are needed for more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries on the strength of its success in the SW field.

The Implications of Changes in Learning of East Coast Gut Successors (동해안굿 전승자 학습 변화의 의미)

  • Jung, Youn-rak
    • (The) Research of the performance art and culture
    • /
    • no.36
    • /
    • pp.441-471
    • /
    • 2018
  • East Coast Gut, a Korean shamanic ritual of the east coast, is held in fishing villages along the coast from Goseong in Gangwon-do down to Busan. It is performed as a series mainly by successor shamans, Korean shamans who have not received spiritual power from a god. This thesis examines the learning patterns of the Seokchool Kim shaman group among the East Coast Gut successor groups, dividing its members into two categories, successor shamans and learner shamans, and on that basis reveals what the learning of East Coast Gut means. For successor shamans, home is the field of education: from a young age they followed Gut events, performing dance in the series and accumulating on-site experience. However, in the families that had passed shamanic work down from generation to generation, the descendants no longer inherited it, which changed how the work is transmitted and learned. Since the 1980s, Gut has been officially recognized as a comprehensive art embracing song, dance, and music and designated as a cultural asset by the state and by individual cities and provinces; at art universities it was adopted as a required course in related majors, which produced new learner shamans who majored in shamanic performance. These learner shamans now receive systematic instruction in the performance skills of East Coast Byeolshin Gut at universities, at the East Coast Byeolshin Gut preservation society, and at the sites where Guts are held. As times changed, the successor shamans accepted learner shamans in order to pass the work down, and the way villagers regard Gut and its performing groups changed as well. 
Unlike in the past, as Gut came to be acknowledged as a wellspring of Korean traditional arts and as the product of comprehensive training in song, dance, and music, and was designated a national intangible cultural asset, shamans' social status and personal pride rose considerably. As shamans became positioned as traditional artists with national and international recognition, in contrast to their once-despised image, their standing and treatment at Gut events and in their relations with villagers changed greatly. The villagers too, alongside the generational shift in shaman groups, have come to acknowledge and accept the addition of newly learned elements. Rather than insisting on the traditional form or purpose of the Gut, each village now considers festival formats and directions for various Guts in which all villagers can take part, seeking new meaning in the changing Gut and in the younger generation's adaptation to it. Although Gut events in general are shrinking amid rapid social change, East Coast Gut is still performed very actively as a series compared with Guts in other regions. This is because, following the successor shamans who struggled to preserve it, learner shamans are actively entering the tradition, and the performance groups both preserve the origins of the Gut and work to use it as artistic content. Moreover, the learner shamans systematically organize what they have learned from the successor shamans and prepare to hand it down to the next generation in a form as close to the original as possible. In the future, East Coast Gut will pass from the last successor shamans to the learner shamans, who will inherit its tradition and develop it to suit the times.

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, reaching beyond stakeholders such as the managers, employees, creditors, and investors of bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of models; as a result, even large 'chaebol' corporations went bankrupt. Even afterward, analyses of corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of default prediction models is essential to serve diverse interests and to avoid a sudden total collapse like the Lehman Brothers case of the global financial crisis. The key variables associated with corporate default also vary over time: Deakin's (1972) study, following the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables of Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. To construct consistent prediction models, it is therefore necessary to compensate for time-dependent bias with a time series algorithm that reflects dynamic change. Centering on the global financial crisis, which had a significant impact on Korea, this study uses ten years of annual corporate data from 2000 to 2009, divided into training, validation, and test sets of seven, two, and one year, respectively. 
To construct a bankruptcy model that is consistent over time, we first train the deep learning time series models on data from before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithms is conducted on validation data covering the financial crisis period (2007-2008), yielding models that show patterns similar to the training results and strong predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008) with the optimal parameters found during validation. Finally, the models trained over these nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of default prediction models based on deep learning time series algorithms. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series models built on the three resulting variable sets are useful for robust default prediction. The definition of bankruptcy follows Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and Lasso regression are used to select the optimal variable groups, and the performance of the multivariate discriminant analysis model of Altman (1968), the logit model of Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. Corporate data pose the difficulties of nonlinear variables, multicollinearity among variables, and a lack of data. 
The logit model addresses nonlinearity, the Lasso regression model mitigates multicollinearity, and the deep learning time series algorithm, combined with a variable-data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis toward automated AI analysis and, ultimately, toward intertwined AI applications. Although research on default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis in default prediction modeling and is more effective in predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations, yet deep learning time series research for the financial industry remains insufficient. As an initial study of deep learning time series analysis of corporate defaults, this work will hopefully serve as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
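To make the recurrent computation behind the LSTM concrete, the following is a scalar LSTM forward step in plain Python; the study's models would be built in a deep learning framework with learned weight matrices, so the weight layout `W` and the functions `lstm_step`/`lstm_run` here are illustrative assumptions only.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step for scalar input and state (illustrative only).

    W maps each of the input (i), forget (f), output (o) gates and the
    candidate cell (g) to a (w_x, w_h, bias) tuple."""
    def gate(key, act):
        wx, wh, b = W[key]
        return act(wx * x + wh * h_prev + b)
    i = gate("i", sigmoid)
    f = gate("f", sigmoid)
    o = gate("o", sigmoid)
    g = gate("g", math.tanh)
    c = f * c_prev + i * g   # cell state: gated mix of old memory and new input
    h = o * math.tanh(c)     # hidden state: the step's output
    return h, c

def lstm_run(xs, W):
    """Run the scalar LSTM over a sequence, e.g. a firm's yearly ratios."""
    h = c = 0.0
    for x in xs:
        h, c = lstm_step(x, h, c, W)
    return h
```

The forget gate is what lets the model decide how much of a firm's earlier financial history to carry forward, which is the property that distinguishes the time series approach from the static models compared in the study.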