• Title/Summary/Keyword: University Evaluation


Assessment Study on Educational Programs for the Gifted Students in Mathematics (영재학급에서의 수학영재프로그램 평가에 관한 연구)

  • Kim, Jung-Hyun;Whang, Woo-Hyung
    • Communications of Mathematical Education / v.24 no.1 / pp.235-257 / 2010
  • The contemporary belief is that the creatively talented can create new knowledge and lead national development, so many countries have taken an interest in gifted education. The U.S.A., England, Russia, Germany, Australia, Israel, and Singapore have enacted laws on gifted education to offer gifted classes, and the Korean government likewise created a Gifted Education Promotion Act in January 2000, with its Enforcement Ordinance announced in April 2002; through this initiative, gifted education became possible. The Enforcement Ordinance was revised in October 2008, chiefly to expand the opportunity for gifted education to students with special educational needs; one measure is to offer gifted education to more students by establishing special classes at each school. It is equally important that the quality of gifted education keep pace with this expansion of opportunity. Since public opinion holds that merely expanding opportunity would be reckless, assessment of teaching and learning programs for the gifted is indispensable. In this study, three middle schools were selected for their teaching and learning programs in mathematics; each first-grade program was reviewed and analyzed through comparative tables of regular and gifted education programs. The content to be taught was also reviewed, and the programs were evaluated against assessment standards revised and modified from the current teaching and learning programs in mathematics. The following research questions were set up to assess the formation of content areas and the appropriateness of teaching and learning programs for the mathematically gifted. A. Does the formation of the special-class content areas comply with the 7th national curriculum? 1. Which content areas of the regular curriculum are applied in the programs? 2. Of the enrichment and selection areas in the curriculum for the gifted, which is applied in the programs? 3. Are the content areas organized and delivered properly? B. Are the programs for the gifted appropriate? 1. Are the educational goals of the programs aligned with those of gifted education in mathematics? 2. Does the content of each program reflect the characteristics of mathematically gifted students and draw out their mathematical talents? 3. Are the teaching and learning models and methods diverse enough to express those talents? 4. Does the assessment in each program reflect the learning goals and content and enhance gifted students' thinking ability? The conclusions are as follows. First, the content areas found most suitable for the mathematically gifted were numeration, arithmetic, geometry, measurement, probability, statistics, and letters and expressions; the enrichment and selection areas of the curriculum for the gifted were offered in many ways so that giftedness could be fully developed. Second, the educational goals of the programs accorded with the directions and philosophy of mathematics education, and reflected the curriculum's required goals of improving creativity, thinking ability, and problem-solving ability. To accomplish these goals, visualization, symbolization, phasing, and exploring strategies were used effectively, and many types of lecturing, cooperative learning, and discovery learning were applied to accomplish the goals of the teaching and learning models. Teaching and learning activities drew on various strategies and models to express the students' talents, including experiment, exploration, application, estimation, conjecture, discussion (conjecture and refutation), and reconsideration. Students were not assessed through paper exams; rather, while program activities were being performed, educational goals and assessment methods were reflected through products, performance assessment, and portfolios rather than paper assessment alone.

Principal Characteristics of Pinus parviflora S. et Z. Native to the Dagelet Island (울릉도(鬱陵島) 섬잣나무의 특성(特性)에 관(關)한 연구(硏究))

  • Ahn, Kun Yong
    • Journal of Korean Society of Forest Science / v.12 no.1 / pp.31-43 / 1971
  • In order to examine the taxonomic difference between the type of Pinus parviflora S. et Z. native to the Dagelet Island and the type of the species introduced to a number of places in the inland of South Korea, the principal characters of needle, cone, and seed were investigated, with a view to evaluating the species for possible use in future reforestation programs in Korea. Pinus parviflora belongs to the subgenus Haploxylon of the genus Pinus, and dendrologists have speculated that the species is not monotypic. 308 randomly selected trees from 8 different elevations of a natural stand of P. parviflora on the Dagelet Island and 168 trees of P. parviflora growing at 15 different locations in the inland of South Korea were sampled, along with 300 trees of P. koraiensis as a control. The results obtained are summarized as follows: 1. The needle length of the Dagelet Island P. parviflora is 21-35 percent longer than that of the inland trees, a statistically significant difference (Table 2). 2. In needle cross-sections, no resin canal was observed in about 50-70 percent of the Dagelet Island sample trees, whereas resin canals, appearing in an external position in most cases, were observed in all inland sample trees. Consequently, the number of resin canals per needle was 0.4-0.9 for the Dagelet Island type and 2.0-2.7 for the inland type, and these differences were statistically significant (Table 3, Fig. 2). 3. The Dagelet Island type of P. parviflora has yellowish-brown cones, while the Suwon and Kwangyang types have reddish-brown cones. In both cone length and the number of cone scales, the difference between the Dagelet Island type and the inland type was also statistically significant. The cone scales of the Dagelet Island type are only slightly opened, whereas those of both the Suwon and Kwangyang types are widely opened (Table 4, Fig. 3). 4. The seed color of the Dagelet Island type is yellowish brown, while that of the Kwangyang and Suwon types is greyish brown. In seed length and width, the Dagelet Island type showed significantly larger values than the inland type. Seed length was greatest in the Kwangyang type, followed by the Suwon and Dagelet Island types in order. The seed wing of the Kwangyang type is longer than the seed, while that of the Dagelet Island type is degenerated and shorter than the seed (Table 5, Fig. 4). 5. The Dagelet Island type of P. parviflora is similar in many respects to the southern type of P. parviflora of Japan, except that many trees have no resin canals in the needle. 6. On the basis of the results obtained in this study, it may be concluded that the Dagelet Island type of P. parviflora is significantly different from the type of the species introduced to the inland, that there is no recognizable variation among the populations at different altitudes on the Dagelet Island, and that individual variation within populations is also negligible. In view of the high value of the tree both as an ornamental and as an economic species, the Dagelet Island type of P. parviflora is considered recommendable for use in future reforestation programs in Korea.


Genetic Diversity of Korean Native Chicken Populations in DAD-IS Database Using 25 Microsatellite Markers (초위성체 마커를 활용한 가축다양성정보시스템(DAD-IS) 등재 재래닭 집단의 유전적 다양성 분석)

  • Roh, Hee-Jong;Kim, Kwan-Woo;Lee, Jinwook;Jeon, Dayeon;Kim, Seung-Chang;Ko, Yeoung-Gyu;Mun, Seong-Sil;Lee, Hyun-Jung;Lee, Jun-Heon;Oh, Dong-Yep;Byeon, Jae-Hyun;Cho, Chang-Yeon
    • Korean Journal of Poultry Science / v.46 no.2 / pp.65-75 / 2019
  • A number of Korean native chicken (KNC) populations are registered in the FAO (Food and Agriculture Organization) DAD-IS (Domestic Animal Diversity Information System, http://www.fao.org/dad-is), but there is a lack of scientific evidence that they are unique populations of Korea. For this reason, this study was conducted to demonstrate the KNC populations' uniqueness using 25 microsatellite markers. A total of 548 chickens from 11 KNC populations (KNG, KNB, KNR, KNW, KNY, KNO, HIC, HYD, HBC, JJC, LTC) and 7 introduced populations (ARA: Araucana; RRC and RRD: Rhode Island Red C and D; LGF and LGK: White Leghorn F and K; COS and COH: Cornish brown and Cornish black) were used. Allele size per locus was determined using GeneMapper Software (v 5.0). A total of 195 alleles were observed, ranging from 3 to 14 per locus. The mean number of alleles (MNA), expected heterozygosity (H_exp), observed heterozygosity (H_obs), and polymorphism information content (PIC) within populations were highest in KNY (4.60, 0.627, 0.648, and 0.563, respectively) and lowest in HYD (1.84, 0.297, 0.286, and 0.236, respectively). Genetic uniformity analysis suggested 15 clusters (ΔK = 66.22). Excluding JJC, the populations were each grouped into a certain cluster with high genetic uniformity; JJC was not grouped into any single cluster but was split among cluster 2 (44.3%), cluster 3 (17.7%), and cluster 8 (19.1%). As a result of this study, a scientific basis for the KNC populations' uniqueness was secured, and these results can be used as basic data for the genetic evaluation and management of KNC breeds.
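The per-population statistics cited above (H_exp, PIC) follow standard population-genetics formulas. As a minimal sketch, here is how they are typically computed per locus from allele frequencies (Nei's expected heterozygosity and the Botstein et al. PIC formula); the function names and allele counts are hypothetical, not taken from the paper:

```python
def h_exp(freqs):
    """Expected heterozygosity: 1 - sum(p_i^2)."""
    return 1.0 - sum(p * p for p in freqs)

def pic(freqs):
    """Botstein et al. PIC: 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    s = 1.0 - sum(p * p for p in freqs)
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            s -= 2.0 * freqs[i] ** 2 * freqs[j] ** 2
    return s

# Hypothetical allele counts at one microsatellite locus
counts = [30, 20, 10]
total = sum(counts)
freqs = [c / total for c in counts]
print(round(h_exp(freqs), 3))  # 0.611
print(round(pic(freqs), 3))    # 0.535
```

Since PIC subtracts an extra cross-term, it is always at most H_exp for the same locus, which matches the ordering of the values reported for KNY and HYD.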

A Study on the Seawater Filtration Characteristics of Single and Dual-filter Layer Well by Field Test (현장실증시험에 의한 단일 및 이중필터층 우물의 해수 여과 특성 연구)

  • Song, Jae-Yong;Lee, Sang-Moo;Kang, Byeong-Cheon;Lee, Geun-Chun;Jeong, Gyo-Cheol
    • The Journal of Engineering Geology / v.29 no.1 / pp.51-68 / 2019
  • This study evaluates the suitability of shore-filtered seawater intake using a dual-filter well as an alternative to direct seawater intake. Full-scale dual-filter and single-filter wells were installed under varied filter conditions in a seashore unconfined (free-surface) aquifer composed of a sand layer, and hydraulic conductivity and proper pumping rate were evaluated for each filter condition. According to the step-drawdown aquifer test, the dual-filter well showed a 110.3% synergy effect in hydraulic conductivity compared with the single-filter well, owing to its better improvement. The dual filter also had higher hydraulic conductivity at the same pumping rate, meaning its permeability is more improved than the single filter's. In the continuous aquifer test, analysis of hydraulic conductivity using the monitoring and gauging wells showed that the dual-filter well (SD1200) had higher conductivity than the single-filter well (SS800), again with a 110.7% synergy effect. Evaluating pumping rate from the drawdown rate, the dual-filter well yielded 122.8% of the single-filter well's pumping rate at a drawdown of 2.0 m, and the proper pumping rate calculated from the drawdown rate was 136.0% that of the single-filter well. Overall, the proper pumping rate improved by 122.8-160% relative to the single filter, with an average improvement rate of 139.6%; in other words, simply installing a dual filter can improve intake efficiency by about 40% over a conventional well. The proper pumping rate of the dual-filter well determined from the inflection point is 2,843.3 L/min, corresponding to a daily seawater intake of about 4,100 m³/day (≈ 4,094.3 m³/day) from a single dual-filter well hole. Because a large volume can be taken from one hole, high applicability is anticipated. Seawater intake through a dual-filter well also avoids damage to facilities from natural disasters such as severe weather or typhoons, and water-quality improvement is expected because the seashore sand layer acts as a filter. It can therefore serve as an alternative to the environmental issues of existing seawater-intake techniques, save installation and maintenance costs, and offers excellent economic applicability. The results of this study will be used as basic data for field demonstration tests applying dual-filter wells to riverbank-filtered water, and are expected to provide standards for well design and construction for riverbank and seashore filtration techniques.
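The quoted daily intake is a direct unit conversion from the pumping rate. A quick sketch of the arithmetic (the abstract reports ≈ 4,094.3 m³/day; the raw product rounds to 4,094.4, so the small difference is rounding in the paper):

```python
# Convert the dual-filter well's proper pumping rate (L/min) to m^3/day
pumping_rate_l_min = 2843.3          # rate determined from the inflection point
minutes_per_day = 24 * 60
daily_intake_m3 = pumping_rate_l_min * minutes_per_day / 1000.0  # 1 m^3 = 1000 L
print(round(daily_intake_m3, 1))  # 4094.4
```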

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data is multidimensional time series data, and handling it requires considering the characteristics of both multidimensional data and time series data. With multidimensional data, correlation between variables must be considered; existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time series data, in turn, is preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis, and these techniques further increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural networks. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula under parametric statistics and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or when the data contains noise or outliers, and they are restricted to training on data free of noise and outliers. An autoencoder, an artificial neural network trained to reproduce its input as closely as possible, has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy probability-distribution or linearity assumptions, and it can learn without labeled training data. However, it remains limited in identifying local outliers in multidimensional data, and the characteristics of time series data greatly increase the data's dimensionality. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to address the limitation in identifying local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder's bottleneck and thereby learn their correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance of the autoencoders over 41 variables was examined for the proposed and comparison models. Reconstruction performance differs by variable: the Memory, Disk, and Network modalities were reconstructed well, with small loss values, in all three autoencoder models; the Process modality showed no significant difference among the three models; and the CPU modality showed excellent performance in CMAE. For anomaly detection performance, ROC curves were prepared for the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, recall was 0.9828 for CMAE, confirming that it detects almost all anomalies. The model's accuracy was also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. From a practical standpoint, the proposed model has an additional advantage beyond performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the dimensional increase they cause can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
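The core detection principle described above — flag inputs the autoencoder reconstructs poorly — can be sketched in miniature. This is not the paper's CMAE: it is a hypothetical linear autoencoder with a 1-D bottleneck, trained in pure Python on 3-D "normal" monitoring vectors that lie near the direction (1, 1, 1); a point that breaks that cross-variable correlation reconstructs badly and is flagged:

```python
import random

random.seed(0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Encoder weights w (3 -> 1 bottleneck) and decoder weights v (1 -> 3)
w = [random.uniform(-0.1, 0.1) for _ in range(3)]
v = [random.uniform(-0.1, 0.1) for _ in range(3)]

def reconstruct(x):
    h = dot(w, x)                 # encode to the 1-D bottleneck
    return [vi * h for vi in v]   # decode back to 3-D

def recon_error(x):
    return sum((xi - ri) ** 2 for xi, ri in zip(x, reconstruct(x)))

# "Normal" training data: three correlated variables moving together
train = []
for _ in range(200):
    t = random.uniform(0.5, 1.5)
    train.append([t + random.gauss(0, 0.02) for _ in range(3)])

lr = 0.01
for _ in range(300):              # plain SGD on squared reconstruction error
    for x in train:
        h = dot(w, x)
        err = [vi * h - xi for vi, xi in zip(v, x)]
        grad_h = dot(err, v)
        for i in range(3):
            v[i] -= lr * 2 * err[i] * h
            w[i] -= lr * 2 * grad_h * x[i]

normal_err = recon_error([1.0, 1.0, 1.0])
anomaly_err = recon_error([1.0, 0.0, 2.0])  # breaks the learned correlation
print(normal_err < anomaly_err)  # True: the anomaly reconstructs poorly
```

The multimodal and conditional extensions in the paper change the network architecture, not this thresholding idea: reconstruction error remains the anomaly score.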

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.23-46 / 2021
  • Collaborative filtering, often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarities for new customers or products, because it computes similarity from direct connections and common features among customers. For this reason, hybrid techniques were designed that also use content-based filtering. In parallel, efforts have been made to solve these problems by applying the structural characteristics of social networks, calculating similarity indirectly through the similar customers placed between two customers: a customer network is created from purchase data, and the similarity between two customers is computed from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether a target customer will accept a recommendation, and the centrality metrics of the network can be utilized in calculating it. Different centrality metrics are important in that they may affect recommendation performance differently; furthermore, in this study, the effect of these centrality metrics on recommendation performance may vary by recommender algorithm. Recommendation techniques using network analysis can also be expected to increase recommendation performance not only for new customers or products but for all customers and products. By considering a customer's purchase of an item as a link created between the customer and the item on the network, predicting user acceptance of a recommendation becomes predicting whether a new link will be created between them. Since classification models fit this binary link-formation problem, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The data for performance evaluation consisted of order data collected from an online shopping mall over four years and two months: the first three years and eight months of records were organized into the social network, and the last four months' records were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that recommendation acceptance rates differ by centrality metric for each algorithm at a meaningful level. This work analyzed four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality recorded the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality showed similar performance across all models. Degree centrality ranked moderately across the models, while betweenness centrality always ranked higher than degree centrality. Finally, closeness centrality showed distinct performance differences depending on the model: it ranked first, with numerically high performance, in logistic regression, artificial neural network, and decision tree, but recorded very low rankings and low performance in the support vector machine and k-nearest neighbors. As the experimental results reveal, in a classification model, network centrality metrics over a subnetwork connecting two nodes can effectively predict the connectivity between those nodes in a social network, and each metric performs differently depending on the classification model type. This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance for certain models.
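Two of the centrality metrics compared above can be computed with a few lines of code. This is an illustrative sketch over a hypothetical customer-item purchase network (the node names and edges are made up, not the paper's data); degree centrality is normalized by the maximum possible degree, and closeness centrality is derived from BFS shortest-path distances:

```python
from collections import deque

# Hypothetical bipartite purchase network: customers linked to items they bought
graph = {
    "cust_A": {"item_1", "item_2"},
    "cust_B": {"item_2", "item_3"},
    "cust_C": {"item_3"},
    "item_1": {"cust_A"},
    "item_2": {"cust_A", "cust_B"},
    "item_3": {"cust_B", "cust_C"},
}

def degree_centrality(g, node):
    """Degree divided by the maximum possible degree, n - 1."""
    return len(g[node]) / (len(g) - 1)

def closeness_centrality(g, node):
    """(n - 1) / sum of shortest-path distances (assumes a connected graph)."""
    dist = {node: 0}
    q = deque([node])
    while q:                          # breadth-first search for shortest paths
        u = q.popleft()
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return (len(g) - 1) / sum(d for n2, d in dist.items() if n2 != node)

print(round(degree_centrality(graph, "item_2"), 3))     # 0.4
print(round(closeness_centrality(graph, "item_2"), 3))  # 0.556
```

Betweenness and eigenvector centrality require shortest-path counting and power iteration respectively, but feed into the classifiers in the same way: as node features of the subnetwork connecting a candidate customer-item pair.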

A Study on the Evaluation of Fertilizer Loss in the Drainage(Waste) Water of Hydroponic Cultivation, Korea (수경재배 유출 배액(폐양액)의 비료 손실량 평가 연구)

  • Jinkwan Son;Sungwook Yun;Jinkyung Kwon;Jihoon Shin;Donghyeon Kang;Minjung Park;Ryugap Lim
    • Journal of Wetlands Research / v.25 no.1 / pp.35-47 / 2023
  • Facility horticulture and hydroponic cultivation are increasing in Korea, requiring management of the drainage (waste nutrient solution) they generate. In this study, the amount of fertilizer contained in the discharged waste solution was determined, and by valuing it in monetary terms, reducing water-treatment costs and recycling the fertilizer components were proposed. The evaluation was based on analyses of the major water-quality parameters of the waste solution by crop (tomato, paprika, cucumber, and strawberry); the P component was converted to its phosphoric acid (P2O5) equivalent. The nitrogen (N) discharge was calculated as 1,145.90 kg·ha⁻¹ for tomato, 920.43 kg·ha⁻¹ for paprika, 804.16 kg·ha⁻¹ for cucumber, and 405.83 kg·ha⁻¹ for strawberry, and the P2O5 content as 830.65 kg·ha⁻¹ for paprika, 622.32 kg·ha⁻¹ for tomato, and 477.67 kg·ha⁻¹ for cucumber. Potassium (K), calcium (Ca), and magnesium (Mg), as well as trace elements such as iron (Fe) and manganese (Mn), were also found to be discharged. The price per kg of each component, averaged from fertilizer prices on the market, was evaluated (in KRW) as N 860.7, P 2,378.2, K 2,121.7, Ca 981.2, Mg 1,036.3, Fe 126,076.9, Mn 62,322.1, Zn 15,825.0, Cu 31,362.0, B 4,238.0, and Mo 149,041.7. The annual fertilizer loss for each crop was then calculated by combining the price per kg based on market fertilizer prices, the waste-solution concentrations by crop analyzed earlier, and the average annual discharge volume of hydroponic cultivation. The average fertilizer value across the four hydroponic crops was 5,475,361.1 won, with tomato at 6,995,622.3 won, paprika at 7,384,923.8 won, cucumber at 5,091,607.9 won, and strawberry at 2,429,290.6 won. It is expected that if hydroponic drainage is managed through on-site treatment or recycling before discharge, rather than released into rivers and treated as a pollutant, it can be a valuable, reusable source of fertilizer while also reducing water-treatment costs.
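The valuation step is a simple product of discharged mass and unit price. As an illustrative calculation using only the abstract's own N figures (the paper's annual totals additionally combine all the other nutrients and the average annual discharge volume, so this covers the N component per hectare only):

```python
# N discharged in drainage, kg per hectare, from the abstract
n_discharge_kg_per_ha = {"tomato": 1145.90, "paprika": 920.43,
                         "cucumber": 804.16, "strawberry": 405.83}
n_price_krw_per_kg = 860.7  # averaged market price of N fertilizer, KRW/kg

# Value of the lost N per hectare, per crop
value = {crop: kg * n_price_krw_per_kg
         for crop, kg in n_discharge_kg_per_ha.items()}
print(round(value["tomato"]))  # about 986,276 KRW of N lost per hectare
```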

Legal Issues on the Collection and Utilization of Infectious Disease Data in the Infectious Disease Crisis (감염병 위기 상황에서 감염병 데이터의 수집 및 활용에 관한 법적 쟁점 -미국 감염병 데이터 수집 및 활용 절차를 참조 사례로 하여-)

  • Kim, Jae Sun
    • The Korean Society of Law and Medicine / v.23 no.4 / pp.29-74 / 2022
  • With the rapid and unexpected spread of COVID-19 in 2020, a social disaster under the Disaster Management Act capable of damaging the people's "life, body, and property," information collected through the inspection and reporting of infectious disease pathogens (Article 11), epidemiological investigations (Article 18), and epidemiological investigations for vaccination (Article 29) served as an important basis for decision-making in the infectious disease crisis, such as promoting vaccination and assessing the current extent of damage. Medical policy decisions using infectious disease data also contribute to quarantine policy decisions, information provision, drug development, and research and technology development, and interest in the legal scope and limits of using such data has grown worldwide. The use of infectious disease data can be classified by purpose, spanning blocking the spread of infectious diseases and their prevention, management, and treatment, and such use becomes broader in a crisis. In particular, as the serious stage under the Disaster Management Act continues, the processing of personally identifiable and sensitive information becomes an important issue, and the treatment of information on medical records, vaccination drugs, vaccination status, underlying diseases, health grades, long-term care recognition grades, pregnancy, and the like requires interpretation. For the "prevention, management, and treatment of infectious diseases," it is difficult to define the concept of medical practice clearly; the types of actions are judged on the basis of legislative purpose, academic principles, expertise, and social norms, but the balancing of legal interests should rest on the need for data use in quarantine policy and on urgent judgment in a public health crisis. Specifically, the speed and degree of transmission of the disease in the crisis, whether the purpose can be achieved without processing sensitive information, whether the processing unfairly infringes the interests of third parties or data subjects, and the effectiveness of the quarantine policies introduced through the processing of sensitive information can serve as major evaluation factors. The collection, provision, and use of infectious disease data for research purposes, on the other hand, proceed through pseudonymization under the Personal Information Protection Act, consent under the Bioethics Act, deliberation by an Institutional Bioethics Committee, and a data-provision deliberation committee. Research use is therefore recognized so long as procedural validity is secured through pseudonymization and data-review deliberation, the data subject's consent, and institutional bioethics review. However, the burden on research managers should be reduced by clarifying the pseudonymization and anonymization procedures; the introduction and consent procedures of the comprehensive consent system and the opt-out system should be clearly established; and procedures for addressing re-identification risks arising from technological development and for securing data should be clearly defined.

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems / v.20 no.2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet, the danger of overflowing personal information is increasing because the data retrieved by the sensors usually contains privacy information. Various technical characteristics of context-aware applications have more troubling implications for information privacy. In parallel with increasing use of context for service personalization, information privacy concerns have also increased such as an unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing context-aware personalized service success. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of a need for a better understanding of information privacy concepts. Especially, it requires that the factors of information privacy should be revised according to the characteristics of new technologies. However, previous information privacy factors of context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have only focused on a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describe information privacy on context-aware applications. 
Second, user survey has been widely used to identify factors of information privacy in most studies despite the limitation of users' knowledge and experiences about context-aware computing technology. To date, since context-aware services have not been widely deployed on a commercial scale yet, only very few people have prior experiences with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Nevertheless, conducting a survey, assuming that the participants have sufficient experience or understanding about the technologies shown in the survey, may not be absolutely valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they only consider location information as a context data). A better understanding of information privacy concern in context-aware personalized services is highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain a reliable opinion from the experts and to produce a rank-order list. It, therefore, lends itself well to obtaining a set of universal factors of information privacy concern and its priority. An international panel of researchers and practitioners who have the expertise in privacy and context-aware system fields were involved in our research. Delphi rounds formatting will faithfully follow the procedure for the Delphi study proposed by Okoli and Pawlowski. 
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For the first round only, experts were treated as individuals, not as panels. Adapting the process of Okoli and Pawlowski, we administered the study in three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then asked to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates; the final sub-factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were asked to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to this end, a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, the experts reported that context data collection and the high identifiability level of identical data were the most important main factor and sub-factor, respectively.
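The concordance analysis described above is typically computed with Kendall's coefficient of concordance (W), which measures how consistently a panel of experts ranks the same set of items. The abstract does not state the exact statistic used, so the following is a minimal illustrative sketch, not the authors' code:

```python
# Kendall's W: agreement among m experts ranking n items (1 = perfect
# agreement, 0 = no agreement). A common consensus check in Delphi studies.

def kendalls_w(rankings):
    """rankings: one rank list per expert, each assigning ranks 1..n to n items."""
    m = len(rankings)                  # number of experts
    n = len(rankings[0])               # number of items ranked
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three experts ranking four factors identically -> perfect concordance.
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # → 1.0
```

A Delphi panel would stop iterating once W stabilizes above an agreed threshold, indicating that further rounds are unlikely to shift the group ranking.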
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with the greatest potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were largely overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent of users' privacy concerns. The traditional questionnaire method was not selected because users' understanding of and experience with this new technology were judged to be severely lacking. To understand users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensor networks as the most important factors among the technological characteristics of context-aware personalized services.
In the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information as context-aware technology develops. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following up on the sub-factor evaluation, additional studies will be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services built around the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase service success rates and user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.

Emoticon by Emotions: The Development of an Emoticon Recommendation System Based on Consumer Emotions (Emoticon by Emotions: 소비자 감성 기반 이모티콘 추천 시스템 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.227-252
    • /
    • 2018
  • The evolution of instant communication has mirrored the development of the Internet, and messenger applications are among the most representative manifestations of instant communication technologies. In messenger applications, senders use emoticons to supplement the emotions conveyed in the text of their messages. The fact that communication via messenger applications is not face-to-face makes it difficult for senders to communicate their emotions to message recipients. Emoticons have long been used as symbols that indicate the moods of speakers. At present, however, emoticon use is evolving into a means of conveying the psychological states of consumers who want to express individual characteristics and personality quirks while communicating their emotions to others. The fact that companies like KakaoTalk, Line, and Apple have begun conducting emoticon business, with sales of related content expected to grow steadily, testifies to the significance of this phenomenon. Nevertheless, despite the development of emoticons themselves and the growth of the emoticon market, no suitable emoticon recommendation system has yet been developed. Even KakaoTalk, a messenger application that commands more than 90% of the domestic market share in South Korea, merely groups emoticons into popular, most recent, or brief categories, so consumers face the inconvenience of constantly scrolling to locate the emoticons they want. An emoticon recommendation system would improve consumer convenience and satisfaction and increase the sales revenue of companies that sell emoticons. To recommend appropriate emoticons, it is necessary to quantify the emotions that consumers see and feel. Such quantification enables us to analyze the characteristics and emotions felt by consumers who used similar emoticons, which, in turn, facilitates emoticon recommendations for consumers. One way to quantify emoticon use is metadata-ization.
Metadata-ization is a means of structuring or organizing unstructured and semi-structured data to extract meaning. By structuring unstructured emoticon data through metadata-ization, we can easily classify emoticons based on the emotions consumers want to express. To determine emoticons' precise emotions, we had to consider sub-detail expressions: not only the seven common emotional adjectives but also the metaphorical expressions that appear only in South Korea, as established by previous studies of emotion focusing on emoticons' characteristics. We therefore collected the sub-detail expressions of emotion based on "Shape", "Color", and "Adumbration". Moreover, to design a highly accurate recommendation system, we considered both emoticon-technical indexes and emoticon-emotional indexes. We identified 14 features for the emoticon-technical indexes and selected 36 emotional adjectives. The 36 emotional adjectives consisted of contrasting pairs, which we reduced to 18, and we measured the 18 emotional adjectives using 40 emoticon sets randomly selected from the top-ranked emoticons in the KakaoTalk shop. We surveyed 277 consumers in their mid-twenties who had experience purchasing emoticons; we recruited them online and asked each to evaluate five different emoticon sets. After data acquisition, we conducted a factor analysis of the emoticon-emotional factors and extracted four factors that we named "Comic", "Softness", "Modernity", and "Transparency". We analyzed both the relationship between the indexes and consumer attitude and the relationship between the emoticon-technical indexes and the emoticon-emotional factors. Through this process, we confirmed that the emoticon-technical indexes did not directly affect consumer attitudes but had a mediating effect on consumer attitudes through the emoticon-emotional factors.
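The factor-extraction step described above (reducing 18 adjective ratings to four latent factors) can be sketched numerically. The abstract does not specify the extraction method, so the following uses PCA on the correlation matrix, a common first step of exploratory factor analysis, with randomly generated stand-in ratings rather than the authors' survey data:

```python
import numpy as np

# Hypothetical illustration: 277 respondents rating 18 emotional adjectives,
# reduced to 4 latent factors via eigendecomposition of the correlation matrix.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(277, 18))               # stand-in survey data
z = (ratings - ratings.mean(0)) / ratings.std(0)   # standardize each adjective
corr = np.corrcoef(z, rowvar=False)                # 18 x 18 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]                  # sort by explained variance
loadings = eigvecs[:, order[:4]] * np.sqrt(eigvals[order[:4]])
print(loadings.shape)  # → (18, 4): each adjective's loading on four factors
```

In the study, the four retained columns would correspond to the "Comic", "Softness", "Modernity", and "Transparency" factors, with each adjective's loading showing how strongly it belongs to each factor.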
The results of the analysis revealed the mechanism consumers use to evaluate emoticons: the emoticon-technical indexes affected the emoticon-emotional factors, and the emoticon-emotional factors affected consumer satisfaction. We therefore designed the emoticon recommendation system using only the four emoticon-emotional factors, with a recommendation method that calculates the Euclidean distance between each emoticon's factor scores. To gauge the accuracy of the emoticon recommendation system, we compared the emotional patterns of selected emoticons with those of the recommended emoticons; the patterns corresponded in principle. We verified the emoticon recommendation system by testing prediction accuracy; the predictions were 81.02% accurate in the first trial, 76.64% accurate in the second, and 81.63% accurate in the third. This study developed a methodology that can be used in various fields, both academically and practically. We expect that the novel emoticon recommendation system we designed will increase emoticon sales for companies that conduct business in this domain and make consumer experiences more convenient. In addition, this study serves as an important first step in the development of an intelligent emoticon recommendation system. The emotional factors proposed in this study could be collected into an emotional library that would serve as an emotion index for evaluation when new emoticons are released. Moreover, by combining the accumulated emotional library with company sales data, sales information, and consumer data, companies could develop hybrid recommendation systems that would bolster convenience for consumers and serve as intellectual assets that companies could strategically deploy.
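The Euclidean-distance recommendation over the four emotional factors can be sketched as follows. The catalog entries and factor scores below are hypothetical placeholders, not the authors' data; only the distance-based ranking reflects the method described in the abstract:

```python
import math

# Each emoticon set is scored on the four emotional factors
# (Comic, Softness, Modernity, Transparency); hypothetical values.
catalog = {
    "puppy":  (0.9, 0.8, 0.2, 0.4),
    "neon":   (0.3, 0.1, 0.9, 0.7),
    "sketch": (0.6, 0.5, 0.4, 0.9),
}

def recommend(target, k=2):
    """Return the k emoticon sets nearest to the target emotion profile."""
    return sorted(catalog, key=lambda name: math.dist(target, catalog[name]))[:k]

print(recommend((0.8, 0.7, 0.3, 0.5)))  # → ['puppy', 'sketch']
```

In practice the target profile would come from the consumer's past emoticon choices or an explicit emotion query, and the catalog scores from the accumulated emotional library the authors propose.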