
Improvement of Certification Criteria based on Analysis of On-site Investigation of Good Agricultural Practices(GAP) for Ginseng (인삼 GAP 인증기준의 현장실천평가결과 분석에 따른 인증기준 개선방안)

  • Yoon, Deok-Hoon;Nam, Ki-Woong;Oh, Soh-Young;Kim, Ga-Bin
    • Journal of Food Hygiene and Safety / v.34 no.1 / pp.40-51 / 2019
  • Ginseng has a unique production system different from those of other crops. It is subject to the Ginseng Industry Act, requires a long cultivation period of 4-6 years, involves complicated cultivation characteristics whereby ginseng is not produced in a single location, and many ginseng farmers engage in mixed farming. Therefore, to bring ginseng production in line with GAP standards, it is necessary to better understand the on-site practices of ginseng farmers according to the established control points, and to provide a proper action plan for improving efficiency. Among the ginseng farmers in Korea who applied for GAP certification, 77.6% obtained it, lower than the 94.1% of farmers who obtained certification for other products. 13.7% of applicants were judged unsuitable during document review due to the use of unregistered pesticides and heavy metals in soil, and another 8.7% failed to obtain certification due to inadequate on-site management results. These failure rates are considerably higher than the corresponding 5.3% document-review and 0.6% on-site inspection failure rates for other crops, which suggests that obtaining GAP certification is relatively more difficult for ginseng farming than for other crops. Ginseng farmers scored an average of 2.65 points on the 10 essential control points among the total of 72 control points, slightly lower than the 2.81 points for other crops. In particular, ginseng farmers averaged 1.96 points on compliance with the standards for safe pesticide use, much lower than the average of 2.95 points for other crops. Therefore, it is necessary to train ginseng farmers to comply with safe pesticide use. On the other essential control points, ginseng farmers were rated at an average of 2.33 points, lower than the 2.58 points for other crops. Several other areas of compliance in which ginseng farmers also rated low in comparison to other crops were found: record keeping for over 1 year, records of pesticide use, pesticide storage, post-harvest storage management, hand washing before and after work, hygiene related to work clothing, worker safety and hygiene training, and a written hazard management plan. Also, among the total of 72 control points, 12 (10 required, 2 recommended) do not apply to ginseng. Therefore, it is considered inappropriate to evaluate the ginseng production process based on the existing certification standards. In conclusion, differentiated certification standards are needed to expand GAP certification among ginseng farmers, and it is also necessary to develop more systematic and field-oriented programs to provide farmers with proper GAP management education.

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • In addition to stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse like the 'Lehman Brothers case' of the global financial crisis. The key variables behind corporate defaults vary over time: Deakin's (1972) study, compared with the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of predictive variables using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets spanning 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using the pre-crisis data (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that includes the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to that of the training data and excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis, the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across the three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. Corporate data pose the problems of nonlinear variables, multicollinearity among variables, and lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at building corporate default prediction models and delivers greater predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
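The time-split training scheme described above lends itself to a compact illustration. The following is a minimal sketch, assuming a panel of annual financial ratios per firm; the feature count, window length, network size, and random placeholder data are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of an LSTM default classifier with a time-based split
# echoing the paper's 7/2/1-year division: training sequences end in
# 2006, validation sequences end in 2008, test sequences end in 2009.
# All data here are random placeholders for annual financial ratios.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

rng = np.random.default_rng(0)
n_firms, n_features, win = 1000, 12, 3   # hypothetical dimensions
X = rng.random((n_firms, 10, n_features)).astype("float32")  # years 2000-2009

def window(end):
    """Sequences of `win` years ending just before year index `end`."""
    return X[:, end - win:end, :]

# Labels: default flag observed after each window (placeholders here).
y_tr, y_va, y_te = (rng.integers(0, 2, n_firms).astype("float32")
                    for _ in range(3))

model = Sequential([
    LSTM(32, input_shape=(win, n_features)),
    Dense(1, activation="sigmoid"),      # probability of default
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(window(7), y_tr, validation_data=(window(9), y_va),
          epochs=5, batch_size=64, verbose=0)
print(model.evaluate(window(10), y_te, verbose=0))
```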

A Study on Anjoon-gut Music in Daejeon - Focused on Sir Shin Seok-bong's Antaek-gut Music- (대전의 앉은굿 음악 연구 - 신석봉 법사의 안택굿을 중심으로 -)

  • Park, Hye-jeong
    • Korean Journal of Heritage: History & Science / v.38 / pp.5-42 / 2005
  • Based on a field investigation of Sir Shin Seok-bong of Daejeon Metropolitan City, holder of the city's intangible cultural asset No. 2, the author investigated the music of Antaek-gut, the base and core of Anjoon-gut, and found the following musical features. A jang-gu (drum) and kkoaengkwari (gong) were used to recite the Sutra (kyungmoon) of Anjoon-gut. The jang-gu, located on the right side, played an accompaniment role with regular beats while the Sutra was recited. The kkoaengkwari, located on the left side, covered the caesuras between Sutra passages, and so was played with various rhythmic variations in accordance with Kojang (鼓杖). This is one way of playing Korean classical music that has temporary caesuras, depending on the reciter's breath or the contents of a Sutra during the chanting, with the jang-gu covering the pause with its variation. In other words, when played in concert, the instruments carrying the main melody rest while another instrument takes its turn to carry the main melody as a form of prolonged sound. The rhythmic cycles of the sutras of Antaek-gut recited with this accompaniment consist of five types: a) Woemarch-jangdan (a single beat) of 4 meter by 3 beat, b) Dumarch-jangdan (two beats) of 8 meter by 3 beat, c) Saemarch-jangdan (three beats) of 4 meter by 3 beat with a fast tempo, d) Mak-gojang, uniform beats with a standardized rhythm, and e) incomplete beats deviating from the regular beats. Sir Shin Seok-bong chanted Chang (唱), a traditional native song he called 'Cheong (淸)', with a cycle of Dumarch-jangdan throughout the places of Antaek-gut; only 'Toesonggyeong', a chant for the gate, the last location of the Antaek, was chanted with a cycle of Woemarch-jangdan. In addition, Saemarch-jangdan and Mak-gojang, which had comparatively faster tempos than the former two jangdans, were played without a chant while a female shaman was dancing and catching her spirit-invoking wand. The Saemarch-jangdan, in particular, was played while dancing, beginning at a relatively slow tempo, proceeding at a violent tempo, and then returning to the slow tempo. This shows one of the representative slow-fast-slow tempo schemes of Korean music. The organizational tones were 'mi-la-do'-re'', and the key tones 'mi-la-do'' were performed with a perfect fourth and minor third, the same as those of Menari-tori. However, it did not show the typical Sigimse (ornamental tone) of Menari-tori, in which the first tone, 'mi', is vibrated and the Sigimse glides down from the tone 're' to 'do'. That is because the regional tone-tori of Chungcheong-do has a relatively weaker musical expression than that of Gyeongsang-do. In addition, the rhythmic types in accordance with the words of the songs for the Antaek-gut music had a comparatively faster tempo than the other sutras. Also, only in 'Toesonggyeong' did the tone 'la' continuously appear throughout the melody in 'a syllabic rhythm', while the other places consisted of either 'syncopated' or 'melismatic' rhythms. Finally, according to a brief investigation of the tone organization of each sutra, the tone 'la' was given more weight, and the tone procedure showed a mainly ascending 'la-do'' and descending 'la-mi' with minor third and perfect fourth. The overall tempo proceeded at M.M.♩.=116-184, while the tempo for the Gut proceeded at M.M.♩.=120-140, which was suitable for reciting a Sutra.

A Study on Management of Records for Accountability of University student body's autonomy activity - Focused on Myongji University's student body - (대학 총학생회 자치활동의 설명책임성을 위한 기록관리 방안 연구 - 명지대학교 총학생회를 중심으로 -)

  • Lee, Yu Bin;Lee, Seung Hwi
    • The Korean Journal of Archival Studies / no.29 / pp.175-223 / 2011
  • A university is an organization with a public character and has accountability to the community for its operating processes. Students account for a majority of the members of a university. Numerous records pour out of universities every year, and university students are major producers of these records. However, the roles and functions of university students as major producers of records, and the records they produce, have not yet received focused attention. In reality, from the archival point of view, the importance of records whose main producers are university students has been relatively underestimated. Against this background, this study approached records produced by university students from the archival point of view. University students produce various types of records, including records produced in the course of research and teaching as well as records produced in the course of various autonomous activities such as clubs and students' associations. This study focused especially on the autonomous activities of university students and emphasized measures for securing accountability over those activities. To secure accountability for activities, records management must serve as the foundation. Therefore, as a way to ensure accountability for university students' autonomous activities, we present measures for systematizing records management and utilizing records. For this, the student body, a university student autonomous organization, was analyzed, and the student body of Myongji University's Humanities Campus was selected as the specific target. First, to identify the status of records management and the activities, organization, and functions of the student body, we interviewed its president. Through this, we analyzed the activities of the student body and examined the corresponding need for accountability. We also derived the types and characteristics of records to be produced at each stage by analyzing the organization and functions of the Myongji University student body. After deriving the types of records produced according to the need for accountability and the organization, functions, and activities of the student body, we analyzed the current state of its records management. To identify the general process of the student body's activities, we analyzed each stage of the activity process of the Myongji University student body, examined the student body's records management methods and responsible agents, and analyzed actual conditions. Through this analysis, we present measures to ensure the accountability of a university student body in three categories: systematization of the records management process, establishment of records management infrastructure, and accountability guarantee measures. This study discussed accountability to society by analyzing the activities and functions of a student body, an autonomous organization of university students, and proposed a model for establishing a records management environment as a measure to secure its accountability. However, because a student body is an organization that operates on a one-year basis, there is a limit to how firmly a records management environment can be established. This study pointed out this limitation and, by presenting a student body records management model, aims to provide clues for more active future research in the field of student records management. The analysis results derived from this research are also expected to be significant for organizing and preserving school history.

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT technology has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although the short-term forecasting power of prices has improved, long-term forecasting power remains limited, so such models are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. Those studies find a point that matches a meaningful pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual returns, it can diverge considerably from reality. Whereas the existing research method tries to find patterns with stock price predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some of the patterns have price predictability, there have been no performance reports from the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a realistic situation because it assumes that both the buy and the sell were actually executed. We tested three ways to calculate the turning points. The first, the minimum change rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices to its left and right is taken as a peak, and a central low price lower than the n low prices to its left and right is taken as a valley. The swing wave method was superior to the other methods in the tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of cases in this simulation was too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the Walk-forward Analysis (WFA) method, which tests the optimization section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize over a stock portfolio because optimizing the variables for each individual stock risks over-optimization; we selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but that the highest volatility is not the best.
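As a concrete illustration of the third method, here is a minimal sketch of swing-wave turning-point detection; the array names, synthetic prices, and the choice of n are assumptions for illustration, not the study's parameters.

```python
# Minimal sketch of the swing wave method: a bar is a peak when its
# high exceeds the highs of the n bars on each side, and a valley
# when its low is below the lows of the n bars on each side.
import numpy as np

def swing_points(high, low, n=3):
    """Return (peak_indices, valley_indices) for high/low price arrays."""
    peaks, valleys = [], []
    for i in range(n, len(high) - n):
        neighbors_h = np.r_[high[i - n:i], high[i + 1:i + n + 1]]
        neighbors_l = np.r_[low[i - n:i], low[i + 1:i + n + 1]]
        if high[i] > neighbors_h.max():
            peaks.append(i)          # central high above n highs each side
        if low[i] < neighbors_l.min():
            valleys.append(i)        # central low below n lows each side
    return peaks, valleys

# Example with synthetic prices; five alternating turning points
# would form one M/W pattern candidate.
rng = np.random.default_rng(1)
prices = rng.random(100).cumsum() + 50
print(swing_points(prices + 0.5, prices - 0.5, n=3))
```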

A Study on the Seawater Filtration Characteristics of Single and Dual-filter Layer Well by Field Test (현장실증시험에 의한 단일 및 이중필터층 우물의 해수 여과 특성 연구)

  • Song, Jae-Yong;Lee, Sang-Moo;Kang, Byeong-Cheon;Lee, Geun-Chun;Jeong, Gyo-Cheol
    • The Journal of Engineering Geology / v.29 no.1 / pp.51-68 / 2019
  • This study evaluates the applicability of seashore-filtered seawater intake using a dual-filter well as an alternative to direct seawater intake. Full-scale dual-filter and single-filter wells were installed under varying filter conditions in a seashore unconfined aquifer composed of sand, and the permeability and proper pumping rate were evaluated according to filter condition. According to the results of the step aquifer test, the dual-filter well showed a 110.3% synergy effect in the permeability coefficient compared to the single filter, reflecting its greater improvement. The dual filter has a higher permeability coefficient at the same pumping rate, which means it has better permeability than the single filter. According to the continuous aquifer test, analysis of the permeability coefficients from the monitoring and gauging wells showed that the dual-filter well (SD1200) has higher permeability than the single-filter well (SS800), again with a 110.7% synergy effect in the permeability coefficient. Evaluating pumping rates through analysis of the water level drawdown rate, the dual-filter well yielded a 122.8% greater pumping rate than the single-filter well at a drawdown of 2.0 m. Calculating the proper pumping rate from the drawdown rate, the dual-filter well showed a 136.0% higher pumping rate than the single-filter well. Overall, the proper pumping rate showed a 122.8~160% improvement over the single filter, with an average improvement rate of 139.6%. In other words, water intake efficiency can be improved by about 40% simply by installing a dual filter instead of a normal well. The proper pumping rate of the dual-filter well determined from the inflection point is 2,843.3 L/min, equivalent to a daily seawater intake of about 4,100 m³/day per dual-filter well hole (2,843.3 L/min × 1,440 min/day ≒ 4,094.3 m³/day). Since plenty of water can be taken from a single hole, high applicability is anticipated. When intaking seawater with a dual-filter well, there is little concern about facility damage from natural disasters such as severe weather or typhoons, and reduced pollution is anticipated because the seashore sand layer acts as a filter. It can therefore serve as an alternative that addresses the environmental issues of existing seawater intake techniques, saves installation and maintenance expenses, and is economically highly adaptable. The results of this study will be utilized as baseline data for upcoming field demonstration tests applying dual-filter wells to riverbank filtration, and are expected to inform standards for well design and construction for riverbank and seashore filtration techniques.

Development of Beauty Experience Pattern Map Based on Consumer Emotions: Focusing on Cosmetics (소비자 감성 기반 뷰티 경험 패턴 맵 개발: 화장품을 중심으로)

  • Seo, Bong-Goon;Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.179-196 / 2019
  • Recently, the "Smart Consumer" has been emerging. He or she is increasingly inclined to search for and purchase products by taking into account personal judgment or expert reviews rather than by relying on information delivered through manufacturers' advertising. This is especially true when purchasing cosmetics. Because cosmetics act directly on the skin, consumers respond seriously to dangerous chemical elements they contain or to skin problems they may cause. Above all, cosmetics should fit well with the purchaser's skin type. In addition, changes in global cosmetics consumer trends make it necessary to study this field. The desire to find one's own individualized cosmetics is being revealed to consumers around the world and is known as "Finding the Holy Grail." Many consumers show a deep interest in customized cosmetics with the cultural boom known as "K-Beauty" (an aspect of "Han-Ryu"), the growth of personal grooming, and the emergence of "self-culture" that includes "self-beauty" and "self-interior." These trends have led to the explosive popularity of cosmetics made in Korea in the Chinese and Southeast Asian markets. In order to meet the customized cosmetics needs of consumers, cosmetics manufacturers and related companies are responding by concentrating on delivering premium services through the convergence of ICT(Information, Communication and Technology). Despite the evolution of companies' responses regarding market trends toward customized cosmetics, there is no "Intelligent Data Platform" that deals holistically with consumers' skin condition experience and thus attaches emotions to products and services. To find the Holy Grail of customized cosmetics, it is important to acquire and analyze consumer data on what they want in order to address their experiences and emotions. The emotions consumers are addressing when purchasing cosmetics varies by their age, sex, skin type, and specific skin issues and influences what price is considered reasonable. Therefore, it is necessary to classify emotions regarding cosmetics by individual consumer. Because of its importance, consumer emotion analysis has been used for both services and products. Given the trends identified above, we judge that consumer emotion analysis can be used in our study. Therefore, we collected and indexed data on consumers' emotions regarding their cosmetics experiences focusing on consumers' language. We crawled the cosmetics emotion data from SNS (blog and Twitter) according to sales ranking ($1^{st}$ to $99^{th}$), focusing on the ample/serum category. A total of 357 emotional adjectives were collected, and we combined and abstracted similar or duplicate emotional adjectives. We conducted a "Consumer Sentiment Journey" workshop to build a "Consumer Sentiment Dictionary," and this resulted in a total of 76 emotional adjectives regarding cosmetics consumer experience. Using these 76 emotional adjectives, we performed clustering with the Self-Organizing Map (SOM) method. As a result of the analysis, we derived eight final clusters of cosmetics consumer sentiments. Using the vector values of each node for each cluster, the characteristics of each cluster were derived based on the top ten most frequently appearing consumer sentiments. Different characteristics were found in consumer sentiments in each cluster. We also developed a cosmetics experience pattern map. 
The study results confirmed that recommendation and classification systems that consider consumer emotions and sentiments are needed because each consumer differs in what he or she pursues and prefers. Furthermore, this study reaffirms that the application of emotion and sentiment analysis can be extended to various fields other than cosmetics, and it implies that consumer insights can be derived using these methods. They can be used not only to build a specialized sentiment dictionary using scientific processes and "Design Thinking Methodology," but we also expect that these methods can help us to understand consumers' psychological reactions and cognitive behaviors. If this study is further developed, we believe that it will be able to provide solutions based on consumer experience, and therefore that it can be developed as an aspect of marketing intelligence.
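For reference, the clustering step can be sketched with an off-the-shelf SOM implementation. The 4x2 grid (chosen here to yield eight nodes, matching the eight final clusters), the feature dimension, and the random placeholder vectors are assumptions; the real inputs would be the indexed sentiment features for the 76 adjectives.

```python
# Minimal sketch: cluster 76 adjective vectors with a Self-Organizing
# Map and group adjectives by their winning node (pip install minisom).
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(42)
adjectives = rng.random((76, 20))        # placeholder feature vectors

som = MiniSom(4, 2, input_len=20, sigma=1.0, learning_rate=0.5,
              random_seed=42)            # 4x2 grid -> eight cluster nodes
som.train_random(adjectives, 1000)

clusters = {}
for idx, vec in enumerate(adjectives):
    clusters.setdefault(som.winner(vec), []).append(idx)
for node, members in sorted(clusters.items()):
    print(node, "->", len(members), "adjectives")
```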

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than humans in many fields, including image and speech recognition. Because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education, many efforts have been actively made to identify current technology trends and analyze the direction of their development. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology is greatly indebted to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify practical trends in AI technology development by analyzing AI-related OSS projects developed through the online collaboration of many parties. This study searched for and collected a list of major AI-related projects created on Github from 2000 to July 2018, and examined the development trends of major technologies in detail by applying text mining to the topic information that characterizes the collected projects and their technical fields. The results showed that the number of software development projects per year was below 100 until 2013, but increased to 229 projects in 2014 and 597 projects in 2015. The number of AI-related open source projects then grew rapidly in 2016 (2,559 OSS projects), and the number of projects initiated in 2017 was 14,213, almost four times the total number generated from 2009 to 2016 (3,555 projects). The number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing remained at the top in all years, implying continuous OSS development in that area. Until 2015, the programming languages Python, C++, and Java were among the top ten most frequent topics; after 2016, however, programming languages other than Python disappeared from the top ten and were replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras, which showed high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, appeared frequently as topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging topics were found at the top of the list, although they had not been from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology.
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with the convolutional neural network and reinforcement learning topics changing rank only slightly. The trend of technology development was then examined using both topic appearance frequency and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. It is noteworthy that although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its ranks rose abruptly between 2013 and 2015, and in recent years both technologies had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases and had relatively low appearance frequency and degree centrality compared with the topics mentioned above. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends and convergence.
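The frequency and centrality measures used above are straightforward to reproduce. Below is a minimal sketch, assuming placeholder per-project topic lists rather than the collected Github data.

```python
# Minimal sketch: build a topic co-occurrence network from project
# topic lists, then rank topics by appearance frequency and by
# degree centrality, as done per period in the study.
from collections import Counter
from itertools import combinations
import networkx as nx

projects = [  # hypothetical topic lists
    ["machine-learning", "deep-learning", "tensorflow"],
    ["machine-learning", "computer-vision"],
    ["deep-learning", "tensorflow", "keras"],
]

# Appearance frequency: number of projects mentioning each topic.
frequency = Counter(t for topics in projects for t in set(topics))

G = nx.Graph()
for topics in projects:
    for a, b in combinations(sorted(set(topics)), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)   # count co-occurring projects

print(frequency.most_common())
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))
```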

Major Class Recommendation System based on Deep learning using Network Analysis (네트워크 분석을 활용한 딥러닝 기반 전공과목 추천 시스템)

  • Lee, Jae Kyu;Park, Heesung;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.95-112 / 2021
  • In university education, the choice of major classes plays an important role in students' careers. However, in line with changes in industry, the fields of major subjects offered by each department are diversifying and growing in number, so students have difficulty choosing and taking classes that match their career paths. In general, students choose classes based on experience, such as the choices of peers or advice from seniors. This has the advantage of taking the general situation into account, but it does not reflect individual tendencies or considerations of existing coursework, and it leads to information inequality in which information is shared only among specific students. In addition, as classes have recently been conducted non-face-to-face and exchanges between students have decreased, even experience-based decisions have become harder to make. Therefore, this study proposes a recommendation system model that can recommend university major classes suited to individual characteristics based on data rather than experience. A recommendation system recommends information and content (music, movies, books, images, etc.) that a specific user may be interested in. It is already widely used in services where individual tendencies matter, such as YouTube and Facebook, and is familiar from personalized content services such as over-the-top (OTT) media services. Taking classes is also a kind of content consumption, in that classes suitable for an individual are selected from a fixed content list. Unlike other content consumption, however, it is characterized by the large influence of the selection results. Music and movies, for example, are usually consumed once, and the time required to consume them is short, so the importance of each item is relatively low and little deliberation goes into selection. Major classes usually have a long consumption time, since they must be taken for a full semester, and each item has high importance and requires greater caution in choice because the composition of the selected classes affects many things, such as career and graduation requirements. Owing to these unique characteristics of major classes, a recommendation system in the education field, even though it covers a relatively small range of items, supports decision-making that reflects meaningful individual characteristics that experience-based decision-making cannot. This study aims to realize personalized education and enhance students' educational satisfaction by presenting a recommendation model for university major classes. The model study used the class history data of undergraduate students at the university from 2015 to 2017, with students and their major names as metadata. The class history data are implicit feedback data that only indicate whether content was consumed, without reflecting preferences for classes; consequently, embedding vectors derived from them to characterize students and classes have low expressive power. With these issues in mind, this study proposes a Net-NeuMF model that generates vectors for students and classes through network analysis and uses them as input values of the model. The model is based on the structure of NeuMF, a representative model for implicit feedback data that uses one-hot vectors.
The input vectors of the model are generated to represent the characteristics of students and classes through network analysis. To generate a vector representing a student, each student is set as a node, and an edge with a weight connects two students if they take the same class. Similarly, to generate a vector representing a class, each class is set as a node, and an edge connects two classes if any student took both. We then utilize Node2Vec, a representation learning methodology that quantifies the characteristics of each node. For the evaluation of the model, we used four indicators commonly used for recommendation systems, and experiments were conducted on three different dimensions to analyze the impact of the embedding dimension on the model. The results show better performance on the evaluation metrics, regardless of dimension, than when using one-hot vectors in the existing NeuMF structure. Thus, this work contributes by using networks of students (users) and classes (items) to increase expressiveness over existing one-hot embeddings, by matching the characteristics of each structure that constitutes the model, and by showing better performance on various evaluation metrics compared with existing methodologies.
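The class-side embedding step can be sketched as follows. The toy enrollment data, graph construction details, and hyperparameters are illustrative assumptions; the resulting vectors would stand in for the one-hot inputs of the model, and a student network would be built symmetrically with students as nodes and edges weighted by shared classes.

```python
# Minimal sketch: build a class co-enrollment network (an edge links
# two classes a student took together, weighted by how many students
# did) and learn Node2Vec embeddings (pip install node2vec).
import networkx as nx
from node2vec import Node2Vec

enrollments = {          # hypothetical student -> classes taken
    "s1": ["DataStructures", "Algorithms", "AI"],
    "s2": ["Algorithms", "AI", "Databases"],
    "s3": ["DataStructures", "Databases"],
}

G = nx.Graph()
for classes in enrollments.values():
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            w = G.get_edge_data(a, b, {"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)

n2v = Node2Vec(G, dimensions=16, walk_length=10, num_walks=50, workers=1)
model = n2v.fit(window=5, min_count=1)   # gensim Word2Vec under the hood
print(model.wv["Algorithms"].shape)      # class vector for the model input
```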

Effect of Carbon Couch Side Rail and Vac-lok In case of Lung RPO irradiation (Lung RPO 선량전달시, Carbon Couch Side Rail과 Vac-lok이 미치는 영향)

  • Kim, Seok Min;Gwak, Geun Tak;Lee, Seung Hun;Kim, Jung Soo;Kwon, Hyoung Cheol;Kim, Yang Su;Lee, Sun Young
    • The Journal of Korean Society for Radiation Therapy / v.30 no.1_2 / pp.27-34 / 2018
  • Purpose: To evaluate the effect of the carbon couch side rail and a vacuum immobilization device (vac-lok) in the case of lung RPO irradiation. Materials and Methods: Vac-loks with right-side thicknesses of 10, 20, and 30 mm were prepared. Glass dosimeters were used to measure doses, with the reference measurement point at the center of the left lung in the phantom. Points A, B, C, and D lie in the left, right, down, and up directions from the center point. In the Side-Rail-Out state, measurements were made without a vac-lok and with the 10, 20, and 30 mm vac-loks. After the glass dosimeters were inserted at the center, A, B, C, and D points, 100 MU of 6 MV X-rays were delivered to the reference center point with a 10 × 10 cm² field size, SAD 100 cm, gantry angle 225°, and a dose rate of 300 MU/min. Five measurements were made for each point. In the Side-Rail-In state, five measurements were made for each point under the same conditions, and the average of the five measurements was taken for each of the Side-Rail-Out and Side-Rail-In states. Results: In the presence of the side rail, the dose reduction ratios were -11.8%, -12.3%, -4.1%, -12.3%, and -7.3% at the center, A, B, C, and D points, respectively. In the Side-Rail-Out state, the dose reduction ratio with the 10 mm vac-lok was -0.9% relative to no vac-lok; with the 20 mm vac-lok it was -2.0%, and with the 30 mm vac-lok, -3.0%. In the Side-Rail-In state, the corresponding ratios were -1.0%, -2.1%, and -3.0%. Relative to the dose without a vac-lok, the combined dose reduction ratios with the side rail and the 10, 20, and 30 mm vac-loks were -12.7%, -13.7%, and -14.2% at the center point and -12.8%, -13.8%, and -14.5% at point A; -4.9%, -6.1%, and -7.1% at point B and -13.4%, -14.4%, and -15.5% at point C; and -8.4%, -9.0%, and -10.4% at point D. Conclusion: Attenuation was caused by the presence of the side rail and by the thickness of the vac-lok. Paying attention to these attenuation factors can make radiation therapy more effective.
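For clarity, the dose reduction ratio reported throughout can be expressed as a simple percent change against a reference reading. The sketch below uses hypothetical dosimeter values, not the paper's measurements.

```python
# Minimal sketch of the dose reduction ratio: percent change of a
# measured dose relative to a reference dose (e.g., the reading
# without side rail or vac-lok in the beam path).
def dose_reduction_ratio(measured: float, reference: float) -> float:
    return (measured - reference) / reference * 100.0

reference = 100.0   # hypothetical reference dose reading
measured = 88.2     # hypothetical reading with the side rail in the beam
print(f"{dose_reduction_ratio(measured, reference):+.1f} %")   # -11.8 %
```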
