• Title/Summary/Keyword: Systems Interface

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • In today's information society, the importance of knowledge services that create value from information grows day by day. With the development of IT, it has also become easy to collect and use information, and many companies in a variety of industries actively use customer information for marketing. Entering the 21st century, companies have been actively using culture and the arts for corporate image management and for marketing closely linked to their commercial interests. However, it is difficult for companies to attract or maintain consumers' interest through technology alone, so cultural activities have become a common tool of differentiation among firms, and many firms have turned customer experience into a new marketing strategy in order to respond effectively to a competitive market. Accordingly, the need for personalized services that provide new experiences based on personal profile information describing the characteristics of the individual is emerging rapidly. Personalized service using individual profile information such as language, symbols, behavior, and emotion is therefore very important today; through it, the interaction between people and content can be assessed, and customer experience and satisfaction can be maximized. Various related studies address customer-centered service, and emotion recognition research in particular has emerged recently. Existing studies have mostly recognized emotion from bio-signals, focusing on voice and facial expressions, which show large emotional changes. However, limitations of equipment and service environments make it difficult to predict people's emotions with these approaches. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome these limitations. Emotion recognition based on gesture and posture has been studied by several researchers. This paper develops a model that recognizes emotional states from body gesture and posture using the difference image method and identifies the best-validated model for predicting four emotions. The proposed model automatically determines and predicts four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in the KOCCA lobby, and participants were shown stimulus videos while their body gestures and postures were recorded as their emotions changed. Body movements were then extracted using the difference image method, and the data were preprocessed to build the proposed neural network model. The model used three time-frame sets (20, 30, and 40 frames), and the model with the best performance was adopted. Before building the three models, the entire set of 97 samples was divided into training, test, and validation sets. The emotion prediction model was constructed as an artificial neural network trained with the back-propagation algorithm, with the learning rate and the momentum rate both set to 0.1. The sigmoid function was used as the transfer function, and the network was a three-layer perceptron with one hidden layer and four output nodes. Based on the test data set, training was stopped at 50,000 iterations after the minimum error had been reached, in order to explore the stopping point of learning. We finally evaluated each model's accuracy and identified the best model for each emotion. The 20-frame model achieved 100% prediction accuracy for sadness and 96% for joy, and the 30-frame model achieved 88% for surprise and 98% for disgust. The findings of this research are expected to provide an effective algorithm for personalized services in industries such as advertising, exhibition, and performance.
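As an editorial illustration of the network described in the abstract above, the following is a minimal sketch of a one-hidden-layer perceptron with sigmoid activation, back-propagation (SGD) with learning rate 0.1 and momentum 0.1, and four output classes. scikit-learn is used as a stand-in for the authors' implementation; the feature dimensionality, hidden-layer size, and random data in place of the difference-image features are assumptions.

```python
# Hypothetical sketch of the described 3-layer perceptron (one hidden layer,
# four output classes), using scikit-learn as a stand-in for the original
# implementation. Feature vectors are assumed to summarize body movement
# extracted from difference images over a 20/30/40-frame window.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((97, 60))          # 97 samples, assumed 60 motion features
y = rng.integers(0, 4, size=97)   # 0=sadness, 1=surprise, 2=joy, 3=disgust

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPClassifier(
    hidden_layer_sizes=(10,),     # one hidden layer (size is an assumption)
    activation="logistic",        # sigmoid transfer function
    solver="sgd",                 # back-propagation with plain SGD
    learning_rate_init=0.1,       # learning rate 0.1 as in the abstract
    momentum=0.1,                 # momentum 0.1 as in the abstract
    max_iter=50000,               # upper bound on training iterations
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```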

Evaluation of Ovary Dose of Childbearing age Woman with Breast cancer in Radiation therapy (가임기 여성의 방사선 치료 시 난소 선량 평가)

  • Park, Sung Jun;Lee, Yeong Cheol;Kim, Seon Myeong;Kim, Young Bum
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.33
    • /
    • pp.145-153
    • /
    • 2021
  • Purpose: The purpose of this study is to experimentally evaluate the ovarian dose during radiation therapy for breast cancer in women of childbearing age. The ovarian dose is evaluated by comparing the dose calculated in the treatment planning system for each treatment technique with the dose measured using a thermoluminescent dosimeter (TLD), and the clinical usefulness of a lead (Pb) apron is investigated by analyzing doses with and without it. Materials and Methods: A Rando humanoid phantom was used for measurement, and wedge-filter radiation therapy, 3D conformal radiation therapy, and intensity-modulated radiation therapy were used as treatment techniques. A treatment plan was established so that 95% of the prescribed dose would be delivered to the right breast on the 3D phantom image obtained with the CT simulator. TLDs were inserted at the surface and at depth at the virtual ovary of the Rando humanoid phantom and irradiated. The measurement locations were the treatment center, a point shifted 2 cm toward the opposite breast from the center of the phantom, points 5 cm, 10 cm, 12.5 cm, 15 cm, 17.5 cm, and 20 cm from the boundary of the right breast toward the treatment center and inferiorly, and the surface and depth of the right ovary; measurements were made at a total of nine points. For the dose comparison in the treatment planning system, two wedge-filter techniques, three-dimensional conformal radiotherapy, and intensity-modulated radiation therapy were planned and compared, and dose measurements with and without a lead apron were compared and analyzed for intensity-modulated radiation therapy. Each measured value was obtained by averaging three TLD readings per point and converting them with the TLD calibration factor, giving the mean point dose. To compare the planned values with the actual measurements, the absolute dose was measured and compared at each point (%Diff). Results: At Point A, the treatment center, a maximum of 201.7 cGy was calculated in the treatment planning system and a maximum of 200.6 cGy was measured with TLD. In all treatment plans, 0 cGy was calculated at Point G, located 17.5 cm inferior to the breast boundary. The TLD measurements showed a maximum of 2.6 cGy at Point G and a maximum of 0.9 cGy at Point J, the ovarian dose point, with absolute dose differences of 0.3% to 1.3%. The dose difference with and without the lead apron ranged from a maximum of 2.1 cGy to a minimum of 0.1 cGy, with %Diff values of 0.1% to 1.1%. Conclusion: In the treatment planning system, the dose differences among the three treatment plans were not significant, ranging from 0.85% to 2.45%. At the ovary, the difference between the planned and measured doses for the Rando humanoid phantom was within 0.9%, with the measured dose slightly higher. This is attributed to the treatment planning system not fully accounting for scattered radiation, whereas the measurements included scattered dose and the dose delivered by CBCT imaging while the TLDs were inserted. In the dosimetry with and without a lead apron, shielding was more effective the closer the point was to the treatment field. Although pregnancy or artificial insemination during radiotherapy is not clinically appropriate, the dose delivered to the ovaries during treatment is not expected to significantly affect the reproductive function of women of childbearing age after radiotherapy. However, since women of childbearing age experience ongoing anxiety, presenting the data from this study may help promote psychological stability.
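For readers unfamiliar with the %Diff comparison used above, the following is a small illustrative computation. The abstract does not state the exact normalization, so this sketch assumes the planned-versus-measured difference is expressed relative to the prescribed dose (taken here as 200 cGy); the pairing of point values is likewise only illustrative.

```python
# Hedged sketch of the %Diff comparison between planned and measured point
# doses. The exact normalization used in the paper is not given here; this
# assumes the difference is expressed relative to the prescribed dose, which
# is an illustrative choice only.
def percent_diff(planned_cgy: float, measured_cgy: float, prescribed_cgy: float) -> float:
    """Absolute planned-vs-measured difference as a percentage of the prescription."""
    return abs(measured_cgy - planned_cgy) / prescribed_cgy * 100.0

# Values in the spirit of the abstract; the pairing is hypothetical.
points = {
    "A (treatment center)": (201.7, 200.6),
    "G (17.5 cm inferior)": (0.0, 2.6),
    "J (ovary)": (0.0, 0.9),
}
for name, (planned, measured) in points.items():
    print(f"{name}: %Diff = {percent_diff(planned, measured, prescribed_cgy=200.0):.2f}%")
```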

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare some of the deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. The criterion is based simply on the length of the code; the learning curve and the ease of learning to code are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding, as in Theano, we can implement and test any new deep learning model or search method that we can think of. Our assessment of execution speed is that there is no meaningful difference among the frameworks. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not identical: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important, and for those learning deep learning, the availability of sufficient examples and references matters as well.
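To make the automatic-differentiation point concrete, here is a minimal sketch of differentiating through a small computational graph using TensorFlow's current GradientTape API. The paper compared earlier, graph-declaration-style APIs, so this illustrates the concept rather than the code evaluated in the study.

```python
# Minimal sketch of automatic differentiation on a computational graph.
# The framework records the operations (multiply -> add -> sigmoid -> loss)
# and applies the chain rule along the graph edges automatically.
import tensorflow as tf

w = tf.Variable(2.0)   # weight
b = tf.Variable(0.5)   # bias
x = tf.constant(3.0)

with tf.GradientTape() as tape:
    y = tf.sigmoid(w * x + b)   # forward pass through the graph
    loss = (y - 1.0) ** 2       # squared error against a target of 1.0

dloss_dw, dloss_db = tape.gradient(loss, [w, b])
print(float(dloss_dw), float(dloss_db))
```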

Feasibility Test on Automatic Control of Soil Water Potential Using a Portable Irrigation Controller with an Electrical Resistance-based Watermark Sensor (전기저항식 워터마크센서기반 소형 관수장치의 토양 수분퍼텐셜 자동제어 효용성 평가)

  • Kim, Hak-Jin;Roh, Mi-Young;Lee, Dong-Hoon;Jeon, Sang-Ho;Hur, Seung-Oh;Choi, Jin-Yong;Chung, Sun-Ok;Rhee, Joong-Yong
    • Journal of Bio-Environment Control
    • /
    • v.20 no.2
    • /
    • pp.93-100
    • /
    • 2011
  • Maintenance of adequate soil water potential during the period of crop growth is necessary to support optimum plant growth and yields. A better understanding of soil water movement within and below the rooting zone can facilitate optimal irrigation scheduling aimed at minimizing the adverse effects of water stress on crop growth and development, as well as the leaching of water below the root zone, which can have adverse environmental effects. The objective of this study was to evaluate the feasibility of using a portable irrigation controller with a Watermark sensor for the cultivation of drip-irrigated vegetable crops in a greenhouse. The controller's ability to hold a soil water potential of -20 kPa was evaluated under summer conditions by cultivating 45-day-old tomato plants grown in three differently textured soils (sandy loam, loam, and loamy sand). Water contents through each soil profile were continuously monitored using three Sentek probes, each consisting of three capacitance sensors at 10, 20, and 30 cm depths. Even though the soil water potential cycled repeatably under the potential-based treatment, the lower limit of the Watermark reading (about 0 kPa) obtained in this study revealed a limitation of the Watermark sensor for optimal irrigation of tomato plants when -20 kPa was used as the trigger point for irrigation. This problem is likely related to the slow response time and inadequate soil-sensor interface of the Watermark sensor compared with a porous ceramic cup-based tensiometer with a sensitive pressure transducer. In addition, the irrigation time of 50 to 60 min for each irrigation event caused the potential to drop rapidly to zero, resulting in over-irrigation of the tomatoes. There were differences in water content among the three soil types under variable-rate irrigation, with water contents of 16 to 24%, 17 to 28%, and 24 to 32% for loamy sand, sandy loam, and loam soils, respectively. The greatest rate of increase in water content was observed in the top 10 cm of the sandy loam soil within about 60 min from the start of irrigation.
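As a rough illustration of the set-point control described above (irrigate when the Watermark reading falls below -20 kPa, then run the drip system for a fixed period), the following is a hypothetical control-loop sketch. The sensor-reading and valve functions are placeholders, not the controller's actual interface.

```python
# Hypothetical sketch of set-point irrigation control: trigger drip irrigation
# when the Watermark reading is drier than -20 kPa, then stop after a fixed
# run time. Sensor and valve interfaces are placeholders for illustration.
import random
import time

SETPOINT_KPA = -20.0   # irrigation trigger threshold
RUN_TIME_S = 50 * 60   # 50 min per irrigation event, as in the study

def read_watermark_kpa() -> float:
    """Placeholder for converting the sensor's electrical resistance to kPa."""
    return random.uniform(-40.0, 0.0)

def set_valve(open_: bool) -> None:
    """Placeholder for actuating the drip irrigation solenoid valve."""
    print("valve", "OPEN" if open_ else "CLOSED")

def control_step() -> None:
    potential = read_watermark_kpa()
    if potential <= SETPOINT_KPA:   # drier than the set point -> irrigate
        set_valve(True)
        time.sleep(1)               # stand-in for RUN_TIME_S in a real deployment
        set_valve(False)

if __name__ == "__main__":
    control_step()
```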

Building a Korean Sentiment Lexicon Using Collective Intelligence (집단지성을 이용한 한글 감성어 사전 구축)

  • An, Jungkook;Kim, Hee-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.49-67
    • /
    • 2015
  • Recently, the emergence of big data and social media has led us into an era of explosive data growth. Social networking services are widely used by people around the world and have become a major communication tool for all ages. Over the last decade, as online social networking sites have become increasingly popular, companies have focused on advanced social media analysis for their marketing strategies. Beyond social media analysis, companies are particularly concerned about the propagation of negative opinions on social networking sites such as Facebook and Twitter, as well as on e-commerce sites. Online word of mouth (WOM), such as product ratings, reviews, and recommendations, is very influential, and negative opinions have a significant impact on product sales. This trend has increased researchers' attention to natural language processing techniques such as sentiment analysis. Sentiment analysis, also referred to as opinion mining, is the process of identifying the polarity of subjective information and has been applied to a variety of research and practical fields. However, obstacles arise when the Korean language (Hangul) is used in natural language processing because it is an agglutinative language whose rich morphology poses problems. As a result, Korean natural language processing resources such as a sentiment lexicon are lacking, which places significant limitations on researchers and practitioners considering sentiment analysis. Our study builds a Korean sentiment lexicon using collective intelligence and provides an API (Application Programming Interface) service to open and share the lexicon data with the public (www.openhangul.com). For preprocessing, we created a Korean lexicon database of over 517,178 words and classified them into sentiment and non-sentiment words. To classify them, we first identified stop words, which are likely to play a detrimental role in sentiment analysis, and excluded them from sentiment scoring. In general, sentiment words are nouns, adjectives, verbs, and adverbs, as they express positive, neutral, or negative sentiment; non-sentiment words are interjections, determiners, numerals, postpositions, etc., as they generally carry no sentiment. To build a reliable sentiment lexicon, we adopted the concept of collective intelligence as a model for crowdsourcing, and implemented the concept of folksonomy in the taxonomy process to support it. To make up for an inherent weakness of folksonomy, we adopted a majority rule by building a voting system. Participants, as voters, chose among three options (positive, negative, and neutral), and the voting was conducted on one of the largest social networking sites for college students in Korea. More than 35,000 votes have been cast by college students, and we keep the voting system open by maintaining the project as an ongoing study. In addition, changes in the sentiment scores of words are an important observation in themselves, as they allow us to track temporal changes in Korean as a natural language. Lastly, our study offers a RESTful, JSON-based API service through a web platform to better support users such as researchers, companies, and developers. This study makes important contributions to both research and practice. In terms of research, the Korean sentiment lexicon serves as an important resource for Korean natural language processing. In terms of practice, practitioners such as managers and marketers can implement sentiment analysis effectively using the lexicon we built. Moreover, our study sheds new light on the value of folksonomy combined with collective intelligence, and we expect it to give a new direction and a new start to the development of Korean natural language processing.
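The majority-rule voting step can be illustrated with a short sketch. The vote counts and the tie-handling rule below are invented for illustration and are not the published procedure of the openhangul project.

```python
# Hedged sketch of majority-rule polarity scoring from crowd votes, in the
# spirit of the voting system described; the example words, vote counts, and
# tie-breaking rule are illustrative assumptions.
from collections import Counter

def majority_polarity(votes: list[str]) -> str:
    """Return 'positive', 'negative', or 'neutral' by simple majority;
    ties fall back to 'neutral' in this sketch."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "neutral"
    return counts[0][0]

votes_for_word = {
    "기쁘다": ["positive"] * 18 + ["neutral"] * 3,    # "to be glad"
    "지루하다": ["negative"] * 12 + ["neutral"] * 5,  # "to be boring"
}
lexicon = {word: majority_polarity(v) for word, v in votes_for_word.items()}
print(lexicon)
```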

A Study on the Design of Case-based Reasoning Office Knowledge Recommender System for Office Professionals (사례기반추론을 이용한 사무지식 추천시스템)

  • Kim, Myong-Ok;Na, Jung-Ah
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.131-146
    • /
    • 2011
  • In today's global business society, it is more essential than ever for office professionals to become competent in information gathering and problem solving. In particular, office professionals not only assist with simple chores but are also required to make decisions as quickly and efficiently as possible in problematic situations that can end in either profit or loss for their company. Since office professionals rely heavily on tacit knowledge to solve problems that arise in everyday business situations, it is helpful and efficient to refer to similar business cases from the past and to share or reuse such previous business knowledge for better performance. Case-based reasoning (CBR) is a problem-solving method that uses similar previous cases to solve new problems. Through CBR, the case closest to the current business situation can be searched and retrieved from the case base or knowledge base and referred to for a new solution, which reduces the time and resources needed and increases the probability of success. The main purpose of this study is to design a system called COKRS (Case-based reasoning Office Knowledge Recommender System) and to develop a prototype of it. COKRS manages cases and their metadata, accepts keywords from the user, searches the case base for the past case most similar to the input keywords, and communicates with users to collect feedback on the quality of the case provided, continuously applying this feedback to update the values in the similarity table. Core concepts such as the system architecture, the definition of a case, the meta-database, and the similarity table are introduced, and an algorithm to retrieve all similar cases from past work history is also proposed. In this research, a case is best defined as a work experience in office administration; however, defining a case in office administration was not easy in practice. We surveyed 10 office professionals to get an idea of how to define a case in office administration and found that most office work is recorded digitally and/or non-digitally, so we defined a case for COKRS as a record or document. The similarity table was composed of items from a job analysis of office professionals conducted in previous research, and the similarity values between items were initially set based on the researchers' experience and a literature review. The results of this study could also be utilized in other areas of business for knowledge sharing wherever it is necessary and beneficial to share and learn from past experience, and we expect this research to serve as a reference for researchers and developers interested in CBR-based office knowledge recommendation systems. A focus group interview (FGI) was conducted with ten administrative assistants carefully selected from various areas of business, who were given a chance to try out COKRS in an actual work setting and make suggestions for future improvement. The FGI identified the user interface for saving and keyword-searching cases as the most positive aspect of COKRS, and identified more efficient transformation of tacit knowledge and know-how into recorded documents as the most urgently needed improvement. The focus group also mentioned that it is essential to secure sufficient support, encouragement, and reward from the company and to promote a positive attitude and atmosphere toward knowledge sharing for everyone's benefit.
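A hypothetical sketch of keyword-based case retrieval against a similarity table, in the spirit of COKRS, is shown below; the cases, keywords, and similarity values are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of retrieving the most similar past cases for a set of
# query keywords using a pairwise keyword similarity table.
CASES = [
    {"id": 1, "title": "Arrange overseas business trip", "keywords": {"travel", "visa", "itinerary"}},
    {"id": 2, "title": "Prepare board meeting minutes", "keywords": {"meeting", "minutes", "agenda"}},
    {"id": 3, "title": "Book conference venue", "keywords": {"meeting", "venue", "booking"}},
]

# Pairwise similarity between keywords (symmetric; 1.0 means identical term).
SIMILARITY_TABLE = {
    ("meeting", "agenda"): 0.6,
    ("meeting", "minutes"): 0.7,
    ("venue", "booking"): 0.5,
}

def keyword_similarity(a: str, b: str) -> float:
    if a == b:
        return 1.0
    return SIMILARITY_TABLE.get((a, b)) or SIMILARITY_TABLE.get((b, a)) or 0.0

def case_score(query: set[str], case: dict) -> float:
    # Best-matching keyword similarity for each query term, summed.
    return sum(max(keyword_similarity(q, k) for k in case["keywords"]) for q in query)

def retrieve(query: set[str], top_n: int = 2) -> list[dict]:
    return sorted(CASES, key=lambda c: case_score(query, c), reverse=True)[:top_n]

print([c["title"] for c in retrieve({"meeting", "minutes"})])
```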

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.129-143
    • /
    • 2012
  • Recently, the rapid progress of a number of standardized web technologies and the proliferation of web users worldwide have brought an explosive increase in the production and consumption of information documents on the web. In addition, most companies produce, share, and manage a huge number of information documents needed to perform their business, and they have also selectively gathered, stored, and managed many web documents published on the web. Along with this increase in the documents that companies must manage, the need for solutions that locate information documents more accurately among a huge number of information sources has increased, and the search engine solution market is expanding accordingly. The most important of the many functions provided by a search engine is to locate accurate information documents among huge information sources. The major metric for evaluating the accuracy of a search engine is relevance, which consists of two measures: precision and recall. Precision is a measure of exactness, that is, what percentage of the retrieved results are actually relevant, whereas recall is a measure of completeness, that is, what percentage of the relevant documents are retrieved. These two measures are weighted differently according to the application domain: when information such as patent documents and research papers must be searched exhaustively, it is better to increase recall, whereas when the amount of information is small, it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the keywords entered by a user. This method has the virtue of locating all matching web documents quickly, even when many search words are entered, but it has the fundamental limitation of not considering the search intention of the user, thereby retrieving irrelevant results as well as relevant ones. It therefore takes additional time and effort to sort the relevant results out of everything a search engine returns. In other words, the keyword search method can increase recall, but it is difficult to locate the web documents a user actually wants to find because it provides no means of understanding the user's intention and reflecting it in the search process. This research therefore suggests a new method that combines an ontology-based search solution with the core search functionality provided by existing search engine solutions. The method enables a search engine to provide better search results by inferring the search intention of the user. To that end, we build an ontology that contains the concepts and relationships of a specific domain. The ontology is used to infer synonyms of the search keywords entered by a user, so that the user's search intention is reflected in the search process more actively than in existing search engines. Based on the proposed method, we implement a prototype search system and test it in the patent domain, where we experiment with retrieving documents relevant to a patent. The experiment shows that our system increases both recall and precision and improves search productivity through an improved user interface that lets the user interact with the system effectively. In future research, we will validate the performance of our prototype system against other search engine solutions and extend the applied domain to other information search settings such as portals.
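The idea of synonym expansion and its effect on recall can be illustrated with a small sketch. The synonym map, toy documents, and relevance judgments below are assumptions for illustration, not the paper's patent-domain ontology or test collection.

```python
# Hedged sketch of ontology-driven synonym expansion followed by a simple
# precision/recall check on a toy document set.
SYNONYMS = {  # hypothetical ontology-derived synonym relations
    "display": {"screen", "monitor"},
    "battery": {"cell", "accumulator"},
}

DOCS = {
    1: "foldable display panel for mobile devices",
    2: "touch screen assembly with flexible substrate",
    3: "lithium cell charging circuit",
    4: "shipping container locking mechanism",
}
RELEVANT = {1, 2}   # assumed ground truth for the query "display"

def expand(query_terms: set[str]) -> set[str]:
    expanded = set(query_terms)
    for term in query_terms:
        expanded |= SYNONYMS.get(term, set())
    return expanded

def search(terms: set[str]) -> set[int]:
    return {doc_id for doc_id, text in DOCS.items() if any(t in text for t in terms)}

for terms in ({"display"}, expand({"display"})):
    retrieved = search(terms)
    precision = len(retrieved & RELEVANT) / max(len(retrieved), 1)
    recall = len(retrieved & RELEVANT) / len(RELEVANT)
    print(sorted(retrieved), f"precision={precision:.2f}", f"recall={recall:.2f}")
```

In this toy run the plain keyword query misses the document phrased as "touch screen", while the expanded query retrieves it, raising recall without hurting precision.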

Evaluation of Web Service Similarity Assessment Methods (웹서비스 유사성 평가 방법들의 실험적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems
    • /
    • v.15 no.4
    • /
    • pp.1-22
    • /
    • 2009
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration both within and across enterprises. To make Web services operational for service-oriented computing, it is important that Web service repositories not only be well structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is being widely recognized, the demand for effective Web service discovery mechanisms is growing accordingly. A number of techniques for Web service discovery have been proposed, but the discovery challenge has not been satisfactorily addressed: most existing solutions are either too rudimentary to be useful or too domain-dependent to be generalizable. In this paper, we propose a Web service organizing framework that combines clustering techniques with string matching and leverages the semantics of the XML-based service specification in WSDL documents. We believe that this is one of the first attempts at applying data mining techniques in the Web service discovery domain. Our proposed approach has several appealing features: (1) it minimizes the requirement of prior knowledge from both service consumers and publishers; (2) it avoids exploiting domain-dependent ontologies; and (3) it is able to visualize the semantic relationships among Web services. We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web service registries. We report preliminary results demonstrating the efficacy of the proposed approach.
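As an illustration of the clustering step, the sketch below groups toy service descriptions by textual similarity. The paper itself uses an unsupervised artificial neural network over WSDL-derived terms; TF-IDF with k-means is used here purely as a stand-in, and the descriptions are invented.

```python
# Hedged sketch of grouping Web service descriptions by textual similarity,
# with TF-IDF + k-means standing in for the paper's unsupervised neural network.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

descriptions = [
    "get stock quote by ticker symbol",
    "retrieve real time share price",
    "convert currency amount between USD and EUR",
    "exchange rate lookup for currency pairs",
]

vectors = TfidfVectorizer().fit_transform(descriptions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for desc, label in zip(descriptions, labels):
    print(label, desc)
```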

A study on the implementation of Medical Telemetry systems using wireless public data network (무선공중망을 이용한 의료 정보 데이터 원격 모니터링 시스템에 관한 연구)

  • 이택규;김영길
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2000.10a
    • /
    • pp.278-283
    • /
    • 2000
  • As information and communication technology has developed, blood pressure, pulse, electrocardiogram (ECG), SpO2, and blood tests can be checked easily at home. Routine health monitoring becomes possible by linking home medical instruments to the wireless public data network; such a service removes the inconvenience of visiting the hospital every time and saves individual time and cost. In each home, biosignal data detected from the human body are transmitted to a distant hospital over the wireless public data network. The medical information transmission system also uses a short-range wireless network to send the acquired biosignals from the personal device to the main center system in the hospital. The remote telemetry system is implemented using a wireless medium access protocol based on the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) scheme from the IEEE 802.11 standard. Among the home-care telemetry functions that could measure blood pressure, pulse, ECG, and SpO2, this study implements the ECG (electrocardiograph) measurement part. The ECG function is built into a mobile device, and a 900 MHz band wireless public data interface is added, so that the elderly, patients, or anyone at home can obtain an ECG and store and record the data. This is essential for managing people diagnosed with heart disease, including more complicated conditions, and for continuously observing patients with latent heart disease. To implement the wireless-network-based medical information transmission system, the ECG data among the biosignals are transmitted using a wireless network modem and the NCL (Native Control Language) protocol to access the wireless network, and the system connects to the wired host computer through the network's SCR (Standard Context Routing) protocol. The host computer checks the recorded personal information and the received ECG data, then sends the corresponding examination result back to the mobile device. The study proposes a medical data transmission system model that uses the wireless public data network.
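The CSMA/CA idea adopted from IEEE 802.11 (listen before transmitting, back off randomly when the medium is busy) can be sketched as follows. The channel sensing is simulated and the parameters are illustrative; this is not the device's actual radio firmware or API.

```python
# Hedged sketch of CSMA/CA-style medium access: sense the channel, transmit if
# idle, otherwise wait a random exponential backoff and retry.
import random

SLOT_TIME = 1          # abstract time slots
MAX_BACKOFF_EXP = 5    # cap on the contention window exponent

def channel_busy() -> bool:
    return random.random() < 0.3   # assumed 30% chance the medium is occupied

def send_with_csma_ca(payload: bytes, max_attempts: int = 6) -> bool:
    for attempt in range(max_attempts):
        if not channel_busy():                          # carrier sense: medium idle
            print(f"transmitted {len(payload)} bytes on attempt {attempt + 1}")
            return True
        cw = 2 ** min(attempt + 1, MAX_BACKOFF_EXP)     # exponential contention window
        backoff = random.randint(0, cw - 1) * SLOT_TIME
        print(f"medium busy, backing off {backoff} slots")
    return False

send_with_csma_ca(b"ECG frame: 250 samples")
```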

The intrinsic instabilities of fluid flow occured in the melt of Czochralski crystal growth system

  • Yi, Kyung-Woo;Koichi Kakimoto;Minoru Eguchi;Taketoshi Hibiya
    • Proceedings of the Korea Association of Crystal Growth Conference
    • /
    • 1996.06a
    • /
    • pp.179-200
    • /
    • 1996
  • In the melt of the Czochralski crystal growth system, asymmetric flow patterns and temperature profiles have been studied by many researchers. The idea that the non-symmetric structure of the growth equipment is responsible for the asymmetric profiles was usually accepted at first; however, further research revealed that intrinsic instabilities unrelated to the equipment's non-symmetric structure can also appear in the melt. Ristorcelli pointed out that there are many possible causes of instability in the melt; the instabilities appear because of the coupling of fluid flow and temperature profiles. Among these, Bénard-type instabilities at zero or low crucible rotation rates are analyzed in this study by visualization experiments using X-ray radiography and by 3-D numerical simulation. Velocity profiles in the silicon melt at different crucible rotation rates were measured by X-ray radiography using tungsten tracers in the melt. The results showed that two types of flow mode exist: one is axisymmetric flow, the other asymmetric flow. In the axisymmetric flow, the tracer trajectories show a torus pattern; however, closer measurement of the axisymmetric case shows that this flow field has small non-axisymmetric velocity components. When the flow is asymmetric, the tracers show random motion from a fixed viewpoint, whereas when the observer rotates at the same velocity as the crucible, the tracer trajectories show a rotating motion whose center does not coincide with the center of the melt. The temperature at a point in the melt was measured using thermocouples at different rotation rates. The measured temperatures oscillated, and such oscillations have also been reported by other researchers. The behavior of the temperature oscillations was quite different at low and at high rotation rates. These experimental results mean that the flow and temperature profiles in the melt are not symmetric, and that the asymmetric mode changes as the rotation rate changes. For comparison with these experimental results, the flow and temperature profiles at crucible rotation rates of 0 and 8 rpm in a crucible of the same size were calculated using a 3-dimensional numerical simulation. A finite difference method was adopted with a 50×30×30 grid. The numerical simulation also showed that the velocity and flow profiles change with rotation rate. Furthermore, the flow patterns and temperature profiles of both cases are not axisymmetric even though axisymmetric boundary conditions are used. Several cells appear at zero rotation; they are formed by the unstable vertical temperature profile (the upper region is colder than the lower part) beneath the free surface of the melt. When this temperature profile is combined with temperature-dependent density differences (Rayleigh-Bénard instability) or surface tension differences (Marangoni-Bénard instability), cell structures form naturally, and both sources of instability are coupled in the cell structures in the melt of the Czochralski process. At high rotation rates, the flow field changes to another type of asymmetric profile; because of the velocity profile, isothermal lines on the plane perpendicular to the centerline become elliptic. When the velocity profiles are plotted in the rotating frame, two vortices appear on either side of the centerline. These vortices seem to be the main cause of the tracer behavior observed in the asymmetric-flow experiment. This profile is quite similar to profiles created by baroclinic instability in a rotating annulus. The temperature profiles obtained from the numerical calculations, and their Fourier transforms, are quite similar to the experimental results. These results indicate that at least two types of intrinsic instability can occur in the melt of Czochralski growth systems. Because the instabilities cause temperature fluctuations in the melt and near the crystal-melt interface, they may generate defects. As the crucible size becomes large, the intensity of the instabilities should increase; therefore, to produce large single crystals of good quality, the behavior of the intrinsic instabilities in the melt, as well as their effect on defects in the ingot, should be studied. As one of the causes of the defects in the large-diameter silicon single crystal grown by the
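The Rayleigh-Bénard and Marangoni-Bénard criteria mentioned above can be made concrete with order-of-magnitude estimates of the corresponding dimensionless numbers. The silicon-melt property values, layer depth, and temperature difference below are rough assumed figures for illustration only.

```python
# Hedged sketch of the instability criteria: Rayleigh number (buoyancy-driven)
# and Marangoni number (surface-tension-driven). All property values are
# assumed, order-of-magnitude figures, not measured data from the paper.
g = 9.81             # gravity, m/s^2
beta = 1.4e-4        # thermal expansion coefficient, 1/K (assumed)
nu = 3.0e-7          # kinematic viscosity, m^2/s (assumed)
alpha = 2.2e-5       # thermal diffusivity, m^2/s (assumed)
mu = 7.0e-4          # dynamic viscosity, Pa*s (assumed)
dsigma_dT = -1.0e-4  # surface tension temperature coefficient, N/(m*K) (assumed)

dT = 5.0             # vertical temperature difference across the layer, K (assumed)
L = 0.02             # depth of the unstably stratified layer, m (assumed)

Ra = g * beta * dT * L**3 / (nu * alpha)
Ma = abs(dsigma_dT) * dT * L / (mu * alpha)

print(f"Rayleigh number  Ra = {Ra:.3e} (critical ~1.7e3 for rigid boundaries)")
print(f"Marangoni number Ma = {Ma:.3e} (critical ~80 for a free surface)")
```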
