• Title/Summary/Keyword: Index number system


EFFICIENCY OF ENERGY TRANSFER BY A POPULATION OF THE FARMED PACIFIC OYSTER, CRASSOSTREA GIGAS IN GEOJE-HANSAN BAY (거제·한산만 양식굴 Crassostrea gigas의 에너지 전환 효율)

  • KIM Yong Sool
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.13 no.4
    • /
    • pp.179-183
    • /
    • 1980
  • The efficiency of energy transfer by a population of the farmed Pacific oyster, Crassostrea gigas, was studied during a 10-month culture period (July 1979-April 1980) in Geoje-Hansan Bay near Chungmu City. Energy use by the farmed oyster population was calculated from estimates of the age-specific natural mortality rate (in half-month units) and from data on growth, gonad output, shell organic matter production, and respiration. Total mortality during the culture period was estimated at approximately 36% from data on the number of surviving individuals per cluster. Growth consisted of two phases: a curvilinear phase during the first half of the culture period (July-November) and a linear phase in the latter half (December-April). The first phase was approximated by the von Bertalanffy growth model, shell height $SH = 6.33\,(1 - e^{-0.2421(t+0.54)})$, where t is age in half-month units. In the latter phase, shell height was related to t by $SH = 4.44 + 0.14t$. Dry meat weight (DW) was related to shell height by $\log DW = -2.2907 + 2.589\,\log SH$ and $\log DW = -5.8153 + 7.208\,\log SH$, each applying over a different shell-height range. Size-specific gonad output (G), calculated from the condition index before and after the spawning season, was related to shell height by $G = 0.0145 + 3.95\times10^{-3}\,SH^{2.9861}$. Shell organic matter production (SO) was related to shell height by $\log SO = -3.1884 + 2.527\,\log SH$. The size- and temperature-specific respiration rate (R), determined in a biotron system with controlled temperature, was related to dry meat weight and temperature (T) by $\log R = (0.386T - 0.5381) + (0.6409 - 0.0083T)\,\log DW$. The energy used in metabolism was calculated from the size- and temperature-specific respiration and data on body composition. The calorie content of oyster meat was estimated by bomb calorimetry with nitrogen correction. The assimilation efficiency of the oyster, estimated directly by an insoluble crude-silicate method, was 55.5%; according to information available from other workers, assimilation efficiency ranges between 40% and 70%. Of the filtered food material, expressed in energy terms for the oyster population, 27.4% was estimated to have been rejected as pseudofaeces; 17.2% was passed as faeces; 35.04% was respired and lost as heat; 0.38% was bound up in shell organics; 2.74% was released as gonad output; and 2.06% was lost as meat through mortality. The remaining 15.28% went to meat production. The net efficiency of energy transfer from assimilation to meat production (yield/assimilation) of a farm population of the oyster was estimated to be 28% during the culture period July 1979-April 1980. The gross efficiency of energy transfer from ingestion to meat production (yield/food filtered) is probably between 11% and 20%. A computational sketch of these regressions follows this entry.

  • PDF
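
The regressions quoted above can be evaluated directly. The following Python sketch is illustrative only: the function names, the age at which growth is assumed to switch from the von Bertalanffy phase to the linear phase, and the negative sign in the exponent (required for the von Bertalanffy form) are assumptions rather than details taken from the paper, and the shell-height ranges over which each dry-meat-weight regression applies are not reproduced.

```python
import math

def shell_height(t, switch_age=10):
    """Shell height SH (cm) at age t in half-month units.
    Uses the von Bertalanffy fit for the first growth phase and the linear
    fit thereafter; switch_age is an assumed, not reported, breakpoint."""
    if t <= switch_age:
        return 6.33 * (1.0 - math.exp(-0.2421 * (t + 0.54)))
    return 4.44 + 0.14 * t

def dry_meat_weight(sh):
    """Dry meat weight DW from shell height (first log-log regression only)."""
    return 10 ** (-2.2907 + 2.589 * math.log10(sh))

def gonad_output(sh):
    """Size-specific gonad output G from shell height."""
    return 0.0145 + 3.95e-3 * sh ** 2.9861

def shell_organics(sh):
    """Shell organic matter production SO from shell height."""
    return 10 ** (-3.1884 + 2.527 * math.log10(sh))

def respiration(dw, temp_c):
    """Respiration rate R from dry meat weight and water temperature (deg C)."""
    return 10 ** ((0.386 * temp_c - 0.5381) + (0.6409 - 0.0083 * temp_c) * math.log10(dw))

for age in (2, 8, 14, 20):
    sh = shell_height(age)
    dw = dry_meat_weight(sh)
    print(f"t={age:2d}  SH={sh:5.2f}  DW={dw:7.4f}  R(15C)={respiration(dw, 15):7.4f}")
```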

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.109-130
    • /
    • 2011
  • One of the major problems in the area of data mining is the size of the data, as most data sets have huge volumes these days. Streams of data are normally accumulated into data storages or databases. Transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some data sets are simply buried unused inside huge data storage because of their size; others are lost as soon as they are created because they are never saved. How to use such large data, and how to use data on a stream efficiently, are challenging questions in the study of data mining. Stream data is a data set that is accumulated into data storage from a data source continuously, and its size in many cases becomes increasingly large over time. Mining information from this massive data takes many resources such as storage, money, and time, and these characteristics of stream data make it difficult and expensive to store all the stream data accumulated over time. On the other hand, if one uses only recent or partial data to mine information or patterns, valuable and useful information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of a rule set over time. A rule set is mined from each data set in the stream and accumulated into a master rule set storage, which also serves as a model for real-time decision making. One of the main advantages of this method is that it takes much smaller storage space than the traditional method of saving the whole data set. Another advantage is that the accumulated rule set is used directly as a prediction model: a prompt response to user requests is possible at any time because the rule set is always ready to be used for decisions. This makes real-time decision making possible, which is the greatest advantage of this method. Based on theories of ensemble approaches, a combination of many different models can produce a better-performing prediction model, and the consolidated rule set covers all of the data while the traditional sampling approach covers only part of it. This study uses stock market data, which is heterogeneous in that the characteristics of the data vary over time. The indexes in stock market data can fluctuate whenever an event influences the stock market index, so the variance of the values in each variable is large compared to that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous one, as it is harder to predict in an unpredictable situation. This study tests two general mining approaches and compares their prediction performance with the method suggested here. The first approach induces a rule set from the recent data set to predict the new data set; the second induces a rule set, every time a prediction is needed, from all the data accumulated from the beginning. We found that neither of these two is as good in performance as the accumulated rule set method. Furthermore, the study experiments with different prediction models: the first builds a prediction model only with the more important rule sets, and the second uses all the rule sets by assigning weights to the rules based on their performance. The second approach shows better performance than the first. The experiments also show that the method suggested in this study can be an efficient approach for mining information and patterns from stream data. This method has the limitation that its application here is bound to stock market data; a more dynamic real-time stream data set is desirable for applying the method. Another open problem is that, as the number of rules increases over time, special rules such as redundant or conflicting rules have to be managed efficiently.
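
As a rough illustration of the accumulated rule-set idea described above, the sketch below mines nothing itself; it simply shows a master rule set that absorbs rules from successive stream chunks, predicts by a weighted vote of matching rules, and adjusts rule weights from observed outcomes. All class names, the weighting scheme, and the toy rules are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    """A single if-then rule: a predicate over a feature record and a predicted label."""
    predicate: Callable[[dict], bool]
    label: str
    weight: float = 1.0  # updated from observed performance

@dataclass
class MasterRuleSet:
    rules: List[Rule] = field(default_factory=list)

    def accumulate(self, new_rules):
        """Add rules mined from the latest stream chunk to the master set."""
        self.rules.extend(new_rules)

    def predict(self, record):
        """Weighted vote of all rules whose condition matches the record."""
        votes = {}
        for r in self.rules:
            if r.predicate(record):
                votes[r.label] = votes.get(r.label, 0.0) + r.weight
        return max(votes, key=votes.get) if votes else None

    def update_weights(self, record, actual, lr=0.1):
        """Reward rules that predicted correctly, penalize those that did not."""
        for r in self.rules:
            if r.predicate(record):
                r.weight = max(r.weight + (lr if r.label == actual else -lr), 0.0)

# Hypothetical usage with toy stock-movement rules
master = MasterRuleSet()
master.accumulate([Rule(lambda x: x["momentum"] > 0, "up"),
                   Rule(lambda x: x["momentum"] <= 0, "down")])
print(master.predict({"momentum": 1.2}))  # -> "up"
master.update_weights({"momentum": 1.2}, actual="down")
```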

Issue tracking and voting rate prediction for 19th Korean president election candidates (댓글 분석을 통한 19대 한국 대선 후보 이슈 파악 및 득표율 예측)

  • Seo, Dae-Ho;Kim, Ji-Ho;Kim, Chang-Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.199-219
    • /
    • 2018
  • With the everyday use of the Internet and the spread of various smart devices, users can now communicate in real time, and the existing communication style has changed. As the Internet changed who produces information, data grew massive, giving rise to the very large collections of information called big data. Big data is seen as a new opportunity to understand social issues. In particular, text mining explores patterns in unstructured text data to find meaningful information. Since text data exists in many places such as newspapers, books, and the web, it is diverse and abundant, which makes it suitable for understanding social reality. In recent years there have been increasing attempts to analyze text from the web, such as SNS and blogs, where the public can communicate freely; this is recognized as a useful way to grasp public opinion immediately, so it can be used for research on political, social, and cultural issues. Text mining has received much attention as a way to investigate the public's view of candidates and to predict the voting rate instead of polling, because many people question the credibility of surveys and respondents tend to refuse to answer or to conceal their real intention when asked to respond to a poll. This study collected comments from the largest Internet portal site in Korea and conducted research on the 19th Korean presidential election in 2017. We collected 226,447 comments from April 29, 2017 to May 7, 2017, a window that includes the period just before election day in which publishing opinion polls is prohibited. We analyzed word frequencies, associated emotional words, topic emotions, and candidate voting rates. Frequency analysis identified the words that were the most important issues each day; in particular, after each presidential debate, the candidate who became an issue appeared at the top of the frequency analysis. The analysis of associated emotional words identified the issues most relevant to each candidate. Topic emotion analysis was used to identify each candidate's topics and to express the public's emotions on those topics. Finally, we estimated the voting rate by combining the volume of comments with a sentiment score. In this way we explored the issues for each candidate and predicted the voting rate. The analysis showed that news comments are an effective tool for tracking the issues of presidential candidates and for predicting the voting rate. In particular, this study produced daily issues and a quantitative index of sentiment, predicted the voting rate for each candidate, and precisely matched the ranking of the top five candidates. Each candidate can thus objectively grasp public opinion and reflect it in election strategy: positive issues can be used more actively in campaigning, and negative issues can be corrected. In particular, candidates should be aware that they can suffer severe reputational damage if they face a moral problem. Voters can objectively look at the issues and public opinion about each candidate and make more informed decisions when voting; by referring to results of this kind before voting, they can see the opinions of the public in the big data and vote from a more objective perspective. If candidates campaign with reference to big data analysis, the public will become more active on the web, recognizing that their wants are being reflected. Political views can then be expressed in various places on the web, which can contribute to political participation by the people.
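
The abstract does not give the exact formula used to combine comment volume with sentiment, so the sketch below shows only one plausible combination: each candidate's comment count is scaled by a sentiment factor and the results are normalized into shares. The candidate labels and all numbers are invented for illustration.

```python
# A minimal illustration (not the authors' exact formula) of combining comment
# volume with an average sentiment score to rank candidates by estimated share.
candidates = {
    # candidate: (number of comments mentioning them, mean sentiment in [-1, 1])
    "A": (52000, 0.21),
    "B": (31000, 0.05),
    "C": (27000, -0.10),
}

def estimated_score(volume, sentiment):
    """Scale comment volume by a sentiment factor mapped from [-1, 1] to [0, 1]."""
    return volume * (sentiment + 1.0) / 2.0

raw = {name: estimated_score(v, s) for name, (v, s) in candidates.items()}
total = sum(raw.values())
for name, score in sorted(raw.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {100 * score / total:.1f}% (estimated share)")
```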

The Work and Job Satisfaction of Paramedics in the Emergency Room of University Hospitals (대학병원 응급실 내 1급 응급구조사의 업무와 직무만족도)

  • Lee, Ok-Hee
    • The Korean Journal of Emergency Medical Services
    • /
    • v.15 no.1
    • /
    • pp.47-63
    • /
    • 2011
  • Purpose : This research examines the work and job satisfaction of paramedics in the emergency rooms of university hospitals. It was conducted to provide basic data needed to establish the scope of paramedics' work in hospitals and to enhance their degree of satisfaction. Methods : A questionnaire survey was conducted on 141 paramedics working in the emergency rooms of 32 university hospitals from August 24, 2010 to September 30, 2010, through direct visits, telephone interviews, or email, with an explanation of the purpose of the research and assurance of the confidentiality of responses. As the tool for measuring job satisfaction, 'The Index of Work Satisfaction' developed by Slavitt et al. (1978) and revised and supplemented by Soon-shim Kim and Hye-ran Kwon (2002) was used. The collected data were analyzed for frequency, percentage, mean, standard deviation, t-test, ANOVA, and Cronbach's α using the SPSS WIN 18.0 program. Results : 1. In the patient evaluation and testing area, electrocardiography (EKG) was found to be the most frequently performed task. In the medical treatment area, cardiopulmonary resuscitation (CPR) at 95% (134 persons) and ventilation assistance through ambu bagging (BVM) at 95% (134 persons) were found to be high. Regarding roles within the hospital and other areas, membership in the hospital's CPR team accounted for 78% (110 persons). 2. In the measurement of the job satisfaction of paramedics working at university hospitals, the total mean score was 2.91. The mean scores by area were: the job itself 3.48, autonomy 3.05, interaction 3.01, organizational demand 2.85, working conditions 2.67, and salary 2.40. This result clearly shows that the paramedics were most satisfied with the work itself and most dissatisfied with the salary. 3. Job satisfaction by general characteristics showed significant differences by age (F=6.547, p=.002), gender (F=4.436, p=.000), marital status (F=-3.270, p=.001), religion (F=2.041, p=.043), motive for application (F=3.603, p=.015), and salary (F=6.658, p=.000). 4. Job satisfaction by working-environment characteristics showed significant differences by total number of paramedics (F=3.779, p=.012), form of employment (F=5.601, p=.001), and intention or non-intention to change jobs (F=-4.037, p=.000). Conclusion : The work of paramedics in the emergency rooms of university hospitals consists of many treatment processes following specialized diagnosis and the performance of professionally subdivided tasks. However, current legislation does not reflect the circumstances to which paramedics are exposed and should therefore be considered for revision. The job satisfaction of paramedics in the emergency rooms of university hospitals was high overall, but salaries and working conditions were weak points. Measures to enhance job satisfaction should be taken through improvement of labor conditions, such as raising salaries, compensating overtime work, providing rest areas, improving the current employment system, and converting temporary employees into regular employees.

Development of Science Academic Emotion Scale for Elementary Students (초등학생 과학 학습정서 검사 도구 개발)

  • Kim, Dong-Hyun;Kim, Hyo-Nam
    • Journal of The Korean Association For Science Education
    • /
    • v.33 no.7
    • /
    • pp.1367-1384
    • /
    • 2013
  • The purpose of this study was to develop a Science Academic Emotion Scale for elementary students. To construct the scale, the authors extracted a core set of 14 emotions related to science learning situations from Kim & Kim (2013) and a literature review. Items on the scale consisted of the 14 emotions combined with science learning situations. The first preliminary scale had 174 items, which were reduced and refined by three science educators. The authors verified the scale using exploratory factor analysis, confirmatory factor analysis, inter-item consistency, and concurrent validity. The second preliminary scale consisted of 141 items and was reduced to seven factors and 56 items by applying exploratory factor analysis twice. The seven factors are: enjoyment-contentment-interest, boredom, shame, discontent, anger, anxiety, and laziness. The 56 items were refined by five science educators, and the scale was fixed at seven factors and 35 items in its final form by applying confirmatory factor analysis twice. Except for the chi-square statistic and the GFI (Goodness of Fit Index), the various goodness-of-fit measures of the seven-factor, 35-item model showed good estimated figures. Cronbach's α of the scale was 0.85, and Cronbach's α of the seven factors was 0.95 for enjoyment-contentment-interest, 0.81 for boredom, 0.87 for shame, 0.82 for discontent, 0.87 for anger, 0.77 for anxiety, and 0.81 for laziness. The concurrent-validity correlation coefficients, estimated between the Science Academic Emotion Scale and the National Assessment System of Science-Related Affective Domain (Kim et al., 1998), were 0.59 for enjoyment-contentment-interest, 0.54 for anxiety, 0.42 for shame, and 0.28 for boredom. Based on these results, the authors judged that the Science Academic Emotion Scale for Elementary Students achieved acceptable validity and reliability.
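
The reliability figures reported above are Cronbach's α values. As a reference for readers, a minimal computation of Cronbach's α from an item-score matrix might look like the following; the toy responses are invented and do not come from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Toy Likert-type responses (rows = students, columns = items); illustrative only.
scores = np.array([[4, 5, 4], [3, 3, 2], [5, 5, 4], [2, 3, 3], [4, 4, 5]])
print(round(cronbach_alpha(scores), 3))
```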

A Study on the Distribution and Dynamics of Relict Forest Trees and Structural Characteristics of Forest Stands in Gangwon Province, Korea (강원지역 산림유존목의 분포, 동태 및 생육임분의 구성적 특성)

  • Shin, Joon-Hwan;Lee, Cheol-Ho;Bae, Kwan-Ho;Cho, Yong-Chan;Kim, Jun-Soo;Cho, Jun-Hee;Cho, Hyun-Je
    • Korean Journal of Environment and Ecology
    • /
    • v.32 no.2
    • /
    • pp.165-175
    • /
    • 2018
  • The purpose of this study is to provide basic data, such as distribution status, growth characteristics, and the structural characteristics of forest stands, for the systematic conservation and management of relict forest trees (stem girth of 300 cm or larger) established naturally in Gangwon Province, Korea. The survey showed that 434 individuals of 19 species (conifers: 228 individuals of 4 species; broad-leaved trees: 206 individuals of 15 species) were distributed in Gangwon Province, and Taxus cuspidata was the most abundant among them with 203 individuals, or about 46.7% of the total. The average stem girth was 404 cm (conifers: 373 cm, broad-leaves: 421 cm), and a multi-stemmed Tilia amurensis growing on the Sorak mountain range had the largest stem girth at 1,113 cm. The average height and crown width of the relict forest trees were 15.4 m and 10.0 m, respectively. Although the environments of relict forest trees differed slightly by species, the relative appearance frequencies of most trees were high where the altitude was above 1,000 m, the slope was steeper than 25°, the slope faced north, and the microtopography was in the upper part of slopes. Regarding the stand characteristics of relict forest trees per unit area (per 100 m²), the average total coverage was 294% (max. 475%), the average total number of species was 36 (max. 60), the average species diversity index (H') was 2.560 (max. 3.593), the average canopy closure was 84.8% (max. 94.6%), and the average basal area per ha was 52.7 m² (max. 116.4 m²; relict trees 30.0 m² and other trees 22.7 m²). The analysis of the dynamics of the forest stands where relict forest trees were growing showed four types of maintenance mechanisms, depending on the supply pattern of succeeding trees: "low-density but persistent" (Quercus mongolica, Abies holophylla, Tilia amurensis, and Pyrus ussuriensis), "long-ago stopped" (Pinus densiflora), "recently stopped" (Abies nephrolepis, Quercus variabilis, and Betula schmidtii), and "periodically repeated supply and stop" (Salix caprea and Quercus serrata).
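
The species diversity figures reported above are values of the Shannon-Wiener index H'. A minimal computation, assuming the natural-logarithm form of the index (the log base used by the paper is not stated here), could look like this; the abundance values are invented for illustration.

```python
import math

def shannon_diversity(counts):
    """Shannon-Wiener diversity H' = -sum(p_i * ln(p_i)) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Toy species abundances within one plot; values are illustrative only.
print(round(shannon_diversity([35, 20, 12, 8, 5, 3, 2]), 3))
```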

A Methodology to Develop a Curriculum based on National Competency Standards - Focused on Methodology for Gap Analysis - (국가직무능력표준(NCS)에 근거한 조경분야 교육과정 개발 방법론 - 갭분석을 중심으로 -)

  • Byeon, Jae-Sang;Ahn, Seong-Ro;Shin, Sang-Hyun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.43 no.1
    • /
    • pp.40-53
    • /
    • 2015
  • To train manpower that meets the requirements of the industrial field, the introduction of the National Qualification Framework (hereinafter referred to as NQF) based on National Competency Standards (hereinafter referred to as NCS) was decided in 2001, led by the Office for Government Policy Coordination. For landscape architecture in the construction field, the "NCS - Landscape Architecture" pilot was developed in 2008 and test-operated for three years starting in 2009. In particular, as the "realization of a competence-based society, not one based on educational background" was adopted as one of the major projects of the Park Geun-Hye government (inaugurated in 2013), the NCS system was constructed on a nationwide scale as a concrete method for realizing this goal. However, because the nationally developed NCS specifies ideal job-performing abilities, it has weaknesses: it cannot reflect actual operational problems such as differences in student level between universities, difficulties in securing equipment and professors, and limits on the number of current curricula. For a soft landing into a practical curriculum, the gap between the current curriculum and the NCS must first be clearly analyzed. Gap analysis is the initial-stage methodology for reorganizing an existing curriculum into an NCS-based curriculum: based on the ability unit elements and performance standards of each NCS ability unit, the level of coincidence (or discrepancy) with the department's existing curriculum is rated on a 1-to-5 Likert scale and analyzed, as sketched below. Thus, universities wishing to operate NCS in the future can, by measuring the level of coincidence and the gap between the current university curriculum and the NCS, secure a basic tool to verify the applicability of NCS and the effectiveness of further development and operation. The advantages of reorganizing the curriculum through gap analysis are, first, that a quantitative index of the NCS adoption rate can be provided for each department in connection with government financial support projects and, second, that an objective standard of sufficiency or insufficiency is provided when reorganizing toward an NCS-based curriculum. In other words, when a relevant NCS subdivision is introduced, the insufficient ability units and ability unit elements can be extracted, and at the same time the supplementary matters for each ability unit element in each existing subject can be identified; this provides direction for detailed class programs and the opening of basic subjects. The Ministry of Education and the Ministry of Employment and Labor must gather people from industry to actively develop and supply NCS at a practical level, so that the requirements of the industrial field are systematically reflected in education, training, and qualification, and universities wishing to apply NCS must reorganize their curricula to connect work and qualification based on NCS. To enable this, universities must consider the prospects of the relevant industry and the relationship between the faculty resources within the university and the local industry in order to clearly select the NCS subdivision to be applied. Afterwards, gap analysis must be used for the NCS-based curriculum reorganization so that the direction of the reorganization is established more objectively and rationally and the university can participate efficiently in the process-evaluation-type qualification system.
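
As a toy illustration of the gap-analysis scoring described above, the sketch below rates how well an existing curriculum covers each ability unit element on a 1-5 Likert scale and summarizes per-unit means and weakly covered elements. The unit and element names, the threshold defining a "gap", and the summary format are hypothetical, not taken from the paper.

```python
# Hypothetical 1-5 coincidence ratings of existing subjects against NCS ability
# unit elements (5 = fully covered, 1 = not covered); names are invented.
ratings = {
    "Landscape planting design": {"site analysis": 4, "planting plan drafting": 2, "cost estimation": 1},
    "Landscape construction":    {"earthwork supervision": 3, "facility installation": 5},
}

for unit, elements in ratings.items():
    mean = sum(elements.values()) / len(elements)
    gaps = [name for name, score in elements.items() if score <= 2]  # weakly covered elements
    print(f"{unit}: mean coincidence {mean:.1f}/5, gap elements: {gaps or 'none'}")
```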

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images; thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called "technical analysts", who examine graphs of past price movements and predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. Each graph is drawn as a 40×40-pixel image, with the graph of each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layers and a 2×2 max-pooling filter in the pooling layers. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was the softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, such as Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
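
One plausible reading of the CNN-FG architecture described above can be written down in PyTorch as follows. The layer ordering, the placement of ReLU and pooling, and the flattened size feeding the first hidden layer are inferred from the abstract rather than reproduced from the paper; softmax is left to the loss function, as is conventional.

```python
import torch
import torch.nn as nn

class CNNFG(nn.Module):
    """A sketch of CNN-FG as described in the abstract: 40x40 RGB fluctuation
    graphs -> two 5x5 convolution layers (6 and 9 filters) with 2x2 max pooling,
    hidden layers of 900 and 32 nodes, and a 2-class output."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # -> 6 x 18 x 18
            nn.Conv2d(6, 9, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # -> 9 x 7 x 7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(9 * 7 * 7, 900), nn.ReLU(),
            nn.Linear(900, 32), nn.ReLU(),
            nn.Linear(32, 2),  # softmax applied implicitly by CrossEntropyLoss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CNNFG()
dummy = torch.randn(4, 3, 40, 40)  # a mini-batch of four graph images
print(model(dummy).shape)          # torch.Size([4, 2])
```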

Radiotherapy in Supraglottic Carcinoma - With Respect to Locoregional Control and Survival - (성문상부암의 방사선치료 -국소종양 제어율과 생존율을 중심으로-)

  • Nam Taek-Keun;Chung Woong-Ki;Cho Jae-Shik;Ahn Sung-Ja;Nah Byung-Sik;Oh Yoon-Kyeong
    • Radiation Oncology Journal
    • /
    • v.20 no.2
    • /
    • pp.108-115
    • /
    • 2002
  • Purpose : A retrospective study was undertaken to determine the role of conventional radiotherapy, with or without surgery, in treating supraglottic carcinoma in terms of local control and survival. Materials and Methods : From Jan. 1986 to Oct. 1996, a total of 134 patients were treated for supraglottic carcinoma by radiotherapy with or without surgery. Of them, 117 patients who had completed radiotherapy formed the basis of this study. The patients were redistributed according to the revised AJCC staging system (1997). The numbers of patients in stages I, II, III, IVA, and IVB were 6 (5%), 16 (14%), 53 (45%), 32 (27%), and 10 (9%), respectively. Eighty patients were treated by radical radiotherapy with 61.2~79.2 Gy (mean: 69.2 Gy) to the primary tumor and 45.0~93.6 Gy (mean: 54.0 Gy) to the regional lymphatics. All patients with stage I and IVB disease were treated by radiotherapy alone. Thirty-seven patients underwent surgery plus postoperative radiotherapy with 45.0~68.4 Gy (mean: 56.1 Gy) to the primary tumor bed and 45.0~59.4 Gy (mean: 47.2 Gy) to the regional lymphatics. Of them, 33 patients received a total laryngectomy (± lymph node dissection), three a supraglottic horizontal laryngectomy (± lymph node dissection), and one a primary excision alone. Results : The 5-year survival rate (5YSR) of all patients was 43%. The 5YSRs of the patients with stage I+II and III+IV disease were 49.9% and 41.2%, respectively (p=0.27). However, the disease-specific survival rate of the patients with stage I disease (n=6) was 100%. The 5YSRs of patients who underwent surgery plus radiotherapy (S+RT) vs radiotherapy alone (RT) were 100% vs 43% (p=0.17) in stage II, 62% vs 52% (p=0.32) in stage III, and 58% vs 6% (p<0.001) in stage IVA. The 5-year actuarial locoregional control rate (5YLCR) of all patients was 57%; by stage I, II, III, IVA, and IVB it was 100%, 74%, 60%, 44%, and 30%, respectively (p=0.008). The 5YLCR for S+RT vs RT was 100% vs 68% (p=0.29) in stage II, 67% vs 55% (p=0.23) in stage III, and 81% vs 20% (p<0.001) in stage IVA. In the radiotherapy-alone group, the 5YLCR of the patients with a complete, partial, and minimal response was 76%, 20%, and 0%, respectively (p<0.001). In all patients, multivariate analysis showed that N-stage, surgery or not, and age were significant factors affecting the survival rate, and that N-stage, surgery or not, and the ECOG performance index were significant factors affecting locoregional control. In the radiotherapy-alone group, multivariate analysis showed that the radiation response and N-stage were significant factors affecting both overall survival and locoregional control. Conclusion : In early-stage supraglottic carcinoma, conventional radiotherapy alone is as effective a modality as surgery plus radiotherapy and can preserve laryngeal function. In the advanced stages, however, radiotherapy combined with concurrent chemotherapy for laryngeal preservation, or surgery, should be considered. In bulky neck disease, planned neck dissection after induction chemotherapy or before radiotherapy should be attempted whenever possible.

Derivation of Digital Music's Ranking Change Through Time Series Clustering (시계열 군집분석을 통한 디지털 음원의 순위 변화 패턴 분류)

  • Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.171-191
    • /
    • 2020
  • This study focuses on digital music, which is among the most valuable cultural assets in modern society and occupies a particularly important position in the flow of the Korean Wave. Digital music data were collected based on the Gaon Chart, a well-established music chart in Korea, and the changes in ranking of the songs that entered the chart over 73 weeks were gathered. Patterns with similar characteristics were then derived through time series cluster analysis, and a descriptive analysis was performed on the notable features of each pattern. The research process suggested by this study is as follows. First, in the data collection process, time series data were collected to track the ranking changes of digital music. Subsequently, in the data processing stage, the collected data were matched with the rankings over time, and the music titles and artist names were processed. The analysis was then performed sequentially in two stages: exploratory analysis and explanatory analysis. The data collection period was limited to the period before the "music bulk buying" phenomenon, a reliability issue related to music rankings in Korea. Specifically, it covers 73 weeks, with December 31, 2017 to January 6, 2018 as the first week and May 19, 2019 to May 25, 2019 as the last, and the analysis targets were limited to digital music released in Korea. Digital music was collected based on the Gaon Chart because, unlike the private music charts serviced in Korea, the Gaon Chart is approved by government agencies and has basic reliability; it can therefore be considered to carry more public confidence than the ranking information provided by other services. The collected data are as follows: for each song that entered the top 100 on the chart within the collection period, the period and ranking, the name of the song, the name of the artist, the name of the album, the Gaon index, the production company, and the distribution company were collected. In total, 7,300 chart entries in the top 100 were identified over the 73 weeks. Because songs frequently appear on the chart for two or more weeks, duplicates were removed in a pre-processing step: the number and location of duplicated songs were checked through a duplicate-check function and then deleted, yielding a list of 742 unique songs for analysis out of the 7,300 chart entries. On these 742 songs, a total of 16 patterns of ranking change were derived through time series cluster analysis. Based on the derived patterns, two representative patterns were identified: "Steady Seller" and "One-Hit Wonder". These two patterns were further subdivided into five patterns in consideration of the survival period of the songs and their rankings. The important characteristics of each pattern are as follows. First, the artist's superstar effect and the bandwagon effect were strong in the one-hit-wonder pattern; when consumers choose digital music, they are strongly influenced by these effects. Second, through the Steady Seller pattern, we identified songs that consumers have chosen for a very long time, and we examined the patterns of the most-selected songs in terms of consumer needs. Contrary to popular belief, the steady-seller (mid-term) pattern, not the one-hit-wonder pattern, received the most choices from consumers. Particularly noteworthy is that the "climbing the chart" phenomenon, which runs contrary to the usual pattern, was confirmed within the steady-seller pattern. This study focuses on the change in music rankings over time, an area that has received relatively little attention in digital music research, and it attempts a new approach to music research by subdividing the patterns of ranking change rather than predicting the success and ranking of music.
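
The clustering step described above can be illustrated with a small sketch: rank trajectories (one row per song, one column per week) are grouped by agglomerative clustering. The distance measure, linkage method, number of clusters, and all trajectory values below are illustrative assumptions, not the study's actual settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical weekly rank trajectories (1 = top of chart, 101 = off the chart),
# padded to a fixed length; values are invented for illustration.
trajectories = np.array([
    [3, 1, 1, 2, 2, 4, 5, 9, 15, 30],               # steady seller
    [2, 1, 2, 3, 5, 8, 12, 20, 40, 80],             # steady seller, mid-term
    [1, 7, 25, 60, 101, 101, 101, 101, 101, 101],   # one-hit wonder
    [90, 60, 35, 20, 10, 6, 4, 3, 2, 2],            # climbing the chart
], dtype=float)

# Agglomerative (Ward) clustering on Euclidean distances between trajectories.
Z = linkage(trajectories, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # cluster label assigned to each trajectory
```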