• Title/Summary/Keyword: Work value

Analysis of the Effect of the Etching Process and Ion Injection Process in the Unit Process for the Development of High Voltage Power Semiconductor Devices (고전압 전력반도체 소자 개발을 위한 단위공정에서 식각공정과 이온주입공정의 영향 분석)

  • Gyu Cheol Choi;KyungBeom Kim;Bonghwan Kim;Jong Min Kim;SangMok Chang
    • Clean Technology
    • /
    • v.29 no.4
    • /
    • pp.255-261
    • /
    • 2023
  • Power semiconductors are semiconductors used for power conversion, transformation, distribution, and control. Global demand for high-voltage power semiconductors has recently been increasing across various industrial fields, and optimization research on high-voltage IGBT devices is urgently needed in these industries. For high-voltage IGBT development, setting the resistance value of the wafer and optimizing the key unit processes are the major variables determining the electrical characteristics of the finished chip; securing and optimizing the process technology that supports a high breakdown voltage is equally important. Etching transfers the mask circuit pattern defined in the photolithography process onto the wafer and removes the unnecessary material beneath the photoresist film. Ion implantation injects impurities into the wafer substrate, together with thermal diffusion, during semiconductor manufacturing in order to achieve a target conductivity. In this study, dry etching and wet etching were controlled during field ring etching, an important process for forming the ring structure that supports the 3.3 kV breakdown voltage of the IGBT, in order to analyze four conditions and form a stable body junction depth that secures the breakdown voltage. The field ring ion implantation process was optimized based on the TEG design, divided into four conditions. The wet etching 1-step method was advantageous in terms of process and work efficiency, and the optimal ring pattern ion implantation condition was a doping concentration of 9.0E13 at an energy of 120 keV. The p-ion implantation condition was optimized at a doping concentration of 6.5E13 and an energy of 80 keV, and the p+ ion implantation condition at a doping concentration of 3.0E15 and an energy of 160 keV.
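
For reference, the optimized conditions reported above can be collected into a small configuration structure, as in the sketch below. The values are those quoted in the abstract; the key names and the dose fields (left as raw values, since the abstract does not state units) are illustrative assumptions.

```python
# Illustrative summary of the optimized process conditions quoted above.
# Key names are assumptions; dose values are kept exactly as reported (units not stated).
optimized_conditions = {
    "field_ring_etch": {"method": "wet etching, 1-step"},
    "ring_pattern_implant": {"dose": 9.0e13, "energy_keV": 120},
    "p_implant": {"dose": 6.5e13, "energy_keV": 80},
    "p_plus_implant": {"dose": 3.0e15, "energy_keV": 160},
}

for step, params in optimized_conditions.items():
    print(f"{step}: {params}")
```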

A cohort study on blood zinc protoporphyrin concentration of workers in storage battery factory (축전지 공장 근로자들의 혈중 Zinc Protoporphyrin에 대한 코호트 연구)

  • Jeon, Man-Joong;Lee, Joong-Jeong;SaKong, Joon;Kim, Chang-Yoon;Kim, Jung-Man;Chung, Jong-Hak
    • Journal of Preventive Medicine and Public Health
    • /
    • v.31 no.1 s.60
    • /
    • pp.112-126
    • /
    • 1998
  • To investigate the effectiveness of interventions in the working environment and personal hygiene against occupational lead exposure, the blood zinc protoporphyrin (ZPP) concentrations of 131 workers (100 exposed subjects and 31 controls) of a newly established battery factory were analyzed. They were measured every 3 months for up to 18 months. The air lead concentration (Pb-A) of the workplaces was also checked three times at 6-month intervals from August 1987. Environmental intervention included local exhaust ventilation and vacuum cleaning of the floor. Personal hygiene intervention included a daily change of clothes, compulsory showering after work, hand washing before meals, prohibition of cigarette smoking and food consumption at the work site, and wearing masks. The mean blood ZPP concentration of the controls was 16.45 ± 4.83 µg/dL at the preemployment examination and increased slightly to 17.77 ± 5.59 µg/dL after 6 months. The mean blood ZPP concentration of the exposed subjects who were employed before the factory was in operation (Group A) was 17.36 ± 5.20 µg/dL on employment and increased to 23.00 ± 13.06 µg/dL after 3 months. It rose to 27.25 ± 6.40 µg/dL at 6 months after employment (p<0.01), which was 1 month after the initiation of the intervention program. It did not increase thereafter and ranged between 25.48 µg/dL and 26.61 µg/dL in the subsequent four tests. The mean blood ZPP concentration of the exposed subjects who were employed after the factory had been in operation but before the intervention program was initiated (Group B) was 14.34 ± 6.10 µg/dL on employment and increased to 28.97 ± 7.14 µg/dL (p<0.01) 3 months later (1 month after the intervention). The values of the subsequent four tests remained between 26.96 µg/dL and 27.96 µg/dL. The mean blood ZPP concentration of the exposed subjects who were employed after the intervention program had been started (Group C) was 21.34 ± 5.25 µg/dL on employment and increased gradually to 23.37 ± 3.86 µg/dL (p<0.01) after 3 months, 23.93 ± 3.64 µg/dL after 6 months, 25.50 ± 3.01 µg/dL after 9 months, and 25.50 ± 3.10 µg/dL after 12 months. The workplaces were classified into four parts according to Pb-A. The Pb-A of part I, the area with the highest level, was 0.365 mg/m³; after the intervention it decreased to 0.216 mg/m³ and 0.208 mg/m³ in the follow-up tests. The Pb-A of part II, which was lower than that of part I, decreased from 0.232 mg/m³ to 0.148 mg/m³ and then 0.120 mg/m³ after the intervention. The Pb-A of part III was measured after the intervention and was 0.124 mg/m³ in January 1988 and 0.181 mg/m³ in August 1988. The Pb-A of part IV was also measured after the intervention and was 0.110 mg/m³ in August 1988. There was no consistent relationship between Pb-A and blood ZPP concentration. The blood ZPP concentrations of the Group A and B workers in the part with the highest Pb-A were lower than those of the workers in the parts with lower Pb-A, and the blood ZPP concentration of the workers in the part with the lowest Pb-A increased more rapidly. The blood ZPP concentration of the Group C workers was highest in part III.
These findings suggest that the intervention in personal hygiene is more effective than the environmental intervention, and that it should be carried out from the first day of employment for both the exposed subjects (blue collar workers) and the controls (white collar workers).

The Analysis of Radiation Exposure of Hospital Radiation Workers (병원 방사선 작업 종사자의 방사선 피폭 분석 현황)

  • Jeong Tae Sik;Shin Byung Chul;Moon Chang Woo;Cho Yeong Duk;Lee Yong Hwan;Yum Ha Yong
    • Radiation Oncology Journal
    • /
    • v.18 no.2
    • /
    • pp.157-166
    • /
    • 2000
  • Purpose : This investigation was performed in order to improve the health care of hospital radiation workers, to predict their risk, to minimize their radiation exposure hazard, and to make them aware of the danger of radiation exposure when working in radiation areas in the hospital. Methods and Materials : The personal radiation exposure records checked regularly at four university hospitals in Pusan, Korea between January 1, 1993 and December 31, 1997 were analyzed. There were 458 persons in these records, but 111 persons who had worked less than one year were excluded, so only 347 persons were included in this study. Results : The average yearly radiation exposure of the 347 persons was 1.52 ± 1.35 mSv. Although this was less than 50 mSv, the legal limit, 125 (36%) people received higher radiation exposure than non-radiation workers. Radiation workers under 30 years old received a mean of 1.87 ± 1.01 mSv/year, those between 31 and 40 years old a mean of 1.22 ± 0.69 mSv/year, and those over 41 years old a mean of 0.97 ± 0.43 mSv/year (p<0.001). Men received a mean of 1.67 ± 1.54 mSv/year, higher than women, who received a mean of 1.13 ± 0.61 mSv/year (p<0.01). Radiation exposure in the nuclear medicine department, in spite of its low-energy sources, was higher than in the other departments that use radiation in the hospital (p<0.05), and within the nuclear medicine department the workers managing radiation sources and injecting them into patients received high exposure, a mean of 3.59 ± 1.81 mSv/year (p<0.01). In the department of diagnostic radiology, high radiation exposure occurred in barium enema rooms, where workers received a mean of 3.74 ± 1.74 mSv/year, and other areas that use fluoroscopy, such as the angiography room (mean 1.17 ± 0.35 mSv/year) and the upper gastrointestinal room (mean 1.74 ± 1.34 mSv/year), showed higher radiation exposure than the average in diagnostic radiology (p<0.01). Doctors and radiation technologists received higher radiation exposure, a mean of 1.75 ± 1.17 mSv/year and 1.50 ± 1.39 mSv/year respectively, than other people who work in radiation areas in the hospital (p<0.05). In particular, young doctors and technologists are the most likely to receive high radiation exposure. Conclusions : Training and education of radiation workers about radiation exposure risks are important, and it is necessary to rotate workers through high-risk areas over short periods. Hospital management must pay more attention to the health of radiation workers and make an effort to reduce radiation exposure in hospital radiation areas to as low as possible.

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies on the automobile industry have focused only on new car models without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 - June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing a modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo; the new car then settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reactive cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, suggesting a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response of Elantra's new and used cars would be. Interestingly, Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there are dealerships that carry both new and used cars of various models, then the NUB model might fit the data as well as the BNU model. Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
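
To make the two-stage structure concrete, the sketch below computes choice probabilities from a BNU-style nested logit (car model choice first, then new vs. used), which is the structure favored in the study. The utility values, the two models shown, and the dissimilarity parameter are illustrative assumptions, not estimates from the paper.

```python
import math

# Illustrative (assumed) utilities: V[model][alternative]
V = {
    "Jetta":   {"new": 1.2, "used": 0.8},
    "Elantra": {"new": 1.0, "used": 0.9},
}
LAMBDA = 0.6  # dissimilarity (inclusive value) parameter; must lie in (0, 1]

def nested_logit_probs(V, lam):
    """Two-stage nested logit: car model at the first stage,
    new vs. used at the second stage (the BNU structure)."""
    # Inclusive value of each nest (car model)
    iv = {b: math.log(sum(math.exp(v / lam) for v in alts.values()))
          for b, alts in V.items()}
    denom = sum(math.exp(lam * iv_b) for iv_b in iv.values())
    probs = {}
    for b, alts in V.items():
        p_model = math.exp(lam * iv[b]) / denom          # first-stage probability
        within = sum(math.exp(v / lam) for v in alts.values())
        for j, v in alts.items():
            probs[(b, j)] = p_model * math.exp(v / lam) / within  # joint probability
    return probs

print(nested_logit_probs(V, LAMBDA))
```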

The Study in Objectification of the diagnosis of Sasang Constitution(According to Analysis of the Past Questionnaires) (사상체질진단(四象體質診斷)의 객관화(客觀化)에 관한 연구(硏究)(기존(旣存) 설문지(說問紙)의 분석(分析)을 중심(中心)으로))

  • Kim, Young-woo;Kim, Jong-won
    • Journal of Sasang Constitutional Medicine
    • /
    • v.11 no.2
    • /
    • pp.151-183
    • /
    • 1999
  • The subjects of this study were 200 patients who had been treated at the Oriental Medical Hospital of Dong Eui Medical Center during the 9 months from January 1999 to September 1999. We carried out judgments of Sasang Constitution according to the 'Questionnaire of Sasang Constitution Classification (I)' and the 'Questionnaire of Sasang Constitution Classification II (QSCC II)', together with a diagnosis by a medical specialist. The following conclusions were drawn from the comparison of Sasang Constitution and the questionnaires. 1. We selected the 84 items that had statistical value out of the 196 items (71 items from the 'Questionnaire of Sasang Constitution Classification (I)' and 121 items from the 'Questionnaire of Sasang Constitution Classification II (QSCC II)'). We then selected 73 items (33 from the 'Questionnaire of Sasang Constitution Classification (I)' and 40 from the 'Questionnaire of Sasang Constitution Classification II (QSCC II)') out of the 84 items, because some items were repeated. 2. We constructed a questionnaire with 85 items, including the items whose statistical value was confirmed by 'A CLINICAL STUDY OF THE JUDGMENT OF SASANG CONSTITUTION ACCORDING TO QUESTIONNAIRE' and 'A CLINICAL STUDY OF THE TYPE OF DISEASE AND SYMPTOM ACCORDING TO SASANG CONSTITUTION CLASSIFICATION'. There were 7 items asking about physique and body form, 7 about external appearance and posture, 3 about habit and character, 3 about physiology and pathology, 4 about frequently occurring phenomena, 3 about eating, 14 about frequently occurring symptoms, 6 about work and qualities/defects, 7 about friendly intercourse, 5 about the usual state of mind, 1 about emotional inclination, 10 about behavioral inclination, and 15 about character. 3. In the new questionnaire, 84 items had relevance to Soyang, 87 items to Soeum, and 70 items to Taeeum, and points were assigned to the items according to their statistical ratios. The total points were 7785.04 for Soyang, 7742.80 for Soeum, and 7746.60 for Taeeum. 4. Comparing the judgments of Sasang Constitution between the clinical diagnosis by a medical specialist and the new questionnaire, the diagnostic accuracy of the new questionnaire was 73.33%. The diagnostic accuracy for Soyang was low while the others were high, and Taeyang was excluded.
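
A minimal sketch of how the reported diagnostic accuracy (agreement between the questionnaire classification and the specialist's diagnosis) can be computed overall and per constitution. The record pairs below are invented for illustration and are not the study's data.

```python
# Hypothetical (invented) records: (specialist diagnosis, questionnaire classification)
pairs = [
    ("Soyang", "Soyang"), ("Soeum", "Soeum"), ("Taeeum", "Soyang"),
    ("Soeum", "Soeum"), ("Taeeum", "Taeeum"), ("Soyang", "Soeum"),
]

# Overall diagnostic accuracy: agreement rate between the two judgments
overall = sum(d == q for d, q in pairs) / len(pairs)

# Accuracy per constitution, taking the specialist's diagnosis as the reference
per_type = {}
for constitution in ("Soyang", "Soeum", "Taeeum"):
    subset = [(d, q) for d, q in pairs if d == constitution]
    if subset:
        per_type[constitution] = sum(d == q for d, q in subset) / len(subset)

print(f"overall accuracy: {overall:.2%}")
print(per_type)
```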

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, a Baduk (Go) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that a machine would not be able to beat a human at Go because, unlike chess, the number of possible move sequences is larger than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It shows especially good performance in image recognition, and it also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They have input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer intends to open an account or not. In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used algorithms and techniques in deep learning, with that of MLP models, which are traditional artificial neural network models. However, since not all network design alternatives can be tested due to the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons in the hidden layers, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the performance of the models, to show how well the models classify the class of interest rather than the overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but the distance between business data fields is not meaningful because the fields are usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the model learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of the position of each field.
In the case of the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best model was the MLP model with two hidden layers using the dropout technique. From this experiment we obtained several findings. First, models using dropout make slightly more conservative predictions than those without dropout and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because CNNs performed well on binary classification problems, to which they have rarely been applied, as well as in the fields where their effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
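
The sketch below illustrates the CNN-with-dropout configuration described above in tf.keras: a Conv1D whose kernel size equals the number of fields (so one filter spans the whole record), a hidden dense layer, dropout with probability 0.5, and F1-based evaluation. The data are random stand-ins for the bank telemarketing set, and the layer widths, filter count, and epoch count are assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

# Hypothetical stand-in for the tabular telemarketing data: n_fields features,
# binary target (account opened or not). Shapes only; not the actual dataset.
n_samples, n_fields = 1000, 16
X = np.random.rand(n_samples, n_fields, 1).astype("float32")  # Conv1D expects (steps, channels)
y = np.random.randint(0, 2, size=n_samples)

# CNN variant described above: kernel size equal to the number of fields,
# so one convolution reads all fields at once, plus a hidden layer and dropout.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_fields, 1)),
    tf.keras.layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),            # neurons dropped with probability 0.5
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# F1 score (here on the training data; a held-out set would be used in practice)
pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print("F1:", f1_score(y, pred))
```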

The Effectiveness of Fiscal Policies for R&D Investment (R&D 투자 촉진을 위한 재정지원정책의 효과분석)

  • Song, Jong-Guk;Kim, Hyuk-Joon
    • Journal of Technology Innovation
    • /
    • v.17 no.1
    • /
    • pp.1-48
    • /
    • 2009
  • Recently we have found some symptoms that R&D fiscal incentives might not be working as intended, based on an analysis of current statistics of firms' R&D data. First, the growth rate of R&D investment in the private sector has slowed down during the recent decade: the average real growth rate of R&D investment was 7.1% from 1998 to 2005, while it was 13.9% from 1980 to 1997. Second, the relative share of R&D investment by SMEs decreased from 29% ('01) to 21% ('05), even though the tax credit for SMEs has been more generous than that for large firms. Third, the R&D expenditure of large firms (excluding the 3 leading firms) has not increased since the late 1990s. We therefore need evidence on whether fiscal incentives are effective in increasing firms' R&D investment. For the econometric model we use firm-level unbalanced panel data for 4 years (2002 to 2005) derived from the MOST database compiled from the annual survey, "Report on the Survey of Research and Development in Science and Technology". We use a fixed effect model (Hausman test results accept the fixed effect model at the 1% significance level) and estimate the model for all firms, large firms, and SMEs respectively. We obtain the following results. For large firms: i) R&D investment responds elastically (1.20) to sales volume; ii) government R&D subsidy induces R&D investment (0.03), but not very effectively; iii) the tax price elasticity is almost unity (-0.99); iv) for large firms, tax incentives are more effective than R&D subsidies. For SMEs: i) sales volume increases R&D investment (0.043), but not very effectively; ii) government R&D subsidy crowds out the R&D investment of SMEs, though not seriously (-0.0079); iii) the tax price elasticity is very inelastic (-0.054). Compared with other studies, Koga (2003) reports a similar tax price elasticity for Japanese firms (-1.0036), Hall (1992) reports a unit tax price elasticity, and Bloom et al. (2002) report -0.354 to -0.124 in the short run. From the results of our analysis we recommend that government R&D subsidies focus on areas such as basic research and the public sector (defense, energy, health, etc.) that do not overlap with private R&D, and that for SMEs the government focus on establishing R&D infrastructure. To promote tax incentive policy, the tax incentive scheme for large firms' R&D investment needs to be strengthened; we recommend that the tax credit for large firms be extended to the total volume of R&D investment.
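
As an illustration of the estimation approach, the sketch below runs a log-log fixed-effects (within-transformation) regression on a synthetic firm panel, where the coefficients can be read as elasticities of R&D investment with respect to sales, subsidy, and tax price. The data, variable names, and specification details are assumptions; only the general fixed-effects, elasticity-based setup follows the abstract.

```python
import numpy as np
import pandas as pd

# Hypothetical firm-level panel in the spirit of the model above:
# log R&D investment regressed on log sales, log subsidy, and log tax price,
# with firm fixed effects removed by the within (demeaning) transformation.
rng = np.random.default_rng(0)
n_firms, n_years = 50, 4
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "log_rnd": rng.normal(size=n_firms * n_years),
    "log_sales": rng.normal(size=n_firms * n_years),
    "log_subsidy": rng.normal(size=n_firms * n_years),
    "log_tax_price": rng.normal(size=n_firms * n_years),
})

cols = ["log_rnd", "log_sales", "log_subsidy", "log_tax_price"]
within = df[cols] - df.groupby("firm")[cols].transform("mean")  # sweep out firm effects

X = within[["log_sales", "log_subsidy", "log_tax_price"]].to_numpy()
y = within["log_rnd"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# In a log-log specification these coefficients are elasticities,
# e.g. the sales elasticity and the tax price elasticity reported above.
print(dict(zip(["sales", "subsidy", "tax_price"], beta.round(3))))
```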

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for their future use or for sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rankings to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages that receive links from more, and higher-scored, pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniform application of the voting notion of PageRank and HITS based on the links in a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to human evaluation, where different items are assigned specific weights, which are then summed up to determine a weighted average. We can also check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. In the case that many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology showing that expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but who also tends to add documents of high quality to his/her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should have more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and which can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and to show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms: while the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
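
The sketch below illustrates only the mutual-reinforcement idea mentioned above (user expertise and document quality reinforcing each other) with a HITS-style iteration on a toy user-document matrix. It is not the paper's full framework, which additionally weights properties, time, and expertise per class; the matrix values are invented for illustration.

```python
import numpy as np

# Bipartite user-document graph: A[u, d] = 1 if user u has document d
# in his/her collection (a hypothetical toy example, not the paper's dataset).
A = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

def mutual_reinforcement(A, iters=50):
    """HITS-style iteration capturing the mutual reinforcement described above:
    a user's expertise depends on the quality of the documents collected,
    and a document's quality depends on the expertise of its collectors."""
    expertise = np.ones(A.shape[0])
    quality = np.ones(A.shape[1])
    for _ in range(iters):
        quality = A.T @ expertise
        quality /= np.linalg.norm(quality)
        expertise = A @ quality
        expertise /= np.linalg.norm(expertise)
    return expertise, quality

expertise, quality = mutual_reinforcement(A)
print("user expertise:", expertise.round(3))
print("document quality:", quality.round(3))
```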

Study on 3D Printer Suitable for Character Merchandise Production Training (캐릭터 상품 제작 교육에 적합한 3D프린터 연구)

  • Kwon, Dong-Hyun
    • Cartoon and Animation Studies
    • /
    • s.41
    • /
    • pp.455-486
    • /
    • 2015
  • 3D printing technology, which started with a patent registration in 1986, attracted little attention outside a few companies because of the lack of awareness at the time. Today, however, as those patents expire after 20 years, the price of 3D printers has decreased to a level that allows purchase by individuals, and the technology is attracting attention from industry as well as the general public, who have come to accept 3D naturally and to share 3D data thanks to the generalization of online information exchange and the improvement of computer performance. The production capability of 3D printers, which is based on digital data that can be transmitted and revised digitally and which enables manufacturing without molding, may bring a groundbreaking change to the manufacturing process, and the same effect may be attained in the character merchandise sector. Using a 3D printer is becoming a necessity in the production of the various figure merchandise at the forefront of the kidult culture that has recently been gaining attention. Considering the expected demand from industrial sites related to such character merchandise, and the lower prices resulting from the expiration of patents and the sharing of technology, it seems essential to introduce education courses that cultivate people who can utilize 3D printers, thereby expanding opportunities and sectors of employment and developing a workforce able to engage in further creative work. However, there are limits to the information that can be obtained when seeking to introduce 3D printers into school education. The press and information media mention only general information, such as the growth of the industry or the promising future value of 3D printers, and academic research also remains at an introductory level, such as analyzing data on industry size, analyzing the applicable scope in industry, or introducing the printing technology. Such a lack of information gives rise to problems at the education site: if the technology is introduced first, without examining practical information such as a comparison of strengths and weaknesses, it can only be used after trial and error, incurring time and opportunity costs. In particular, if an expensive piece of equipment is introduced that does not suit the features of school education, the losses would be significant. This research targeted general users without a technical background, rather than specialists. Instead of merely introducing 3D printer technology as previous work has done, it compares the strengths and weaknesses of the representative technologies and analyzes the problems and points requiring attention in use, in order to explain what features a 3D printer should have when it is required, in particular, for education on the development of figure merchandise as optional cultural content in cartoon-related departments, and to provide information that can be of practical help when seeking to provide education using 3D printers in the future. In the main body, the technologies are explained using a new classification based on the buttress method, the types of materials, the two-dimensional printing method, and the three-dimensional printing method.
The reason for selecting such a different classification method was to allow easy comparison of the practical problems encountered in use. In conclusion, the most suitable 3D printer was found to be an FDM printer, which is comparatively cheap and has low repair, maintenance, and material costs, although the quality of its output is somewhat lower, and it was additionally recommended to select a vendor that provides good technical support.

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge by not only sharing knowledge but also transforming and integrating it. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a larger contribution by the upper 20 percent of participants in knowledge sharing enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this kind of knowledge sharing distribution on the efficiency of knowledge collaboration and is extended to reflect work characteristics. All analyses were conducted on actual data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, which are the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions of the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income within a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, the collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to the featured article level, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of the group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles refer to at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to the collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, the curvilinear effect of the Pareto ratio and the inequality of knowledge sharing on the collaboration efficiency is more pronounced for more academic tasks in an online community.
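
The two focal variables can be computed directly from the per-editor contribution counts of an article group, as in the sketch below; the edit counts shown are hypothetical.

```python
import numpy as np

def pareto_ratio(contributions):
    """Share of total contributions made by the top 20% of contributors,
    as defined above for a featured-article editor group."""
    c = np.sort(np.asarray(contributions, dtype=float))[::-1]
    top_n = max(1, int(np.ceil(0.2 * len(c))))
    return c[:top_n].sum() / c.sum()

def gini(contributions):
    """Gini coefficient of the contribution distribution (0 = perfect equality)."""
    c = np.sort(np.asarray(contributions, dtype=float))
    n = len(c)
    index = np.arange(1, n + 1)
    return (2 * (index * c).sum()) / (n * c.sum()) - (n + 1) / n

# Hypothetical edit counts of the editors of one article group
edits = [120, 45, 30, 10, 8, 5, 3, 2, 2, 1]
print("Pareto ratio:", round(pareto_ratio(edits), 3))
print("Gini coefficient:", round(gini(edits), 3))
```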