• Title/Summary/Keyword: Computer Applications


Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.55-75 / 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in various application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of the data elements in a sequence is considered, so it can easily find simple sequential patterns but is limited in finding the more interesting sequential patterns that are widely used in real-world applications. One of the essential research topics for overcoming this limit is weighted sequential pattern mining, in which not only the generation order of a data element but also its weight is considered in order to obtain more interesting sequential patterns. Recently, data in various application fields has increasingly taken the form of continuous data streams rather than finite stored data sets, and the database research community has therefore begun focusing its attention on processing over data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once, and the memory usage for the analysis should remain finite even though new data elements are continuously generated. Moreover, newly generated data elements should be processed as fast as possible so that an up-to-date analysis result of the data stream can be produced and instantly utilized upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering these changes in the form of data generated in real-world application fields, much research has been actively performed to find various kinds of knowledge embedded in data streams. It mainly focuses on the efficient mining of frequent itemsets and sequential patterns over data streams, which have proven useful in conventional data mining over finite data sets. In addition, mining algorithms have been proposed to efficiently reflect the changes of data streams over time in their mining results. However, these works have targeted naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, and have taken no interest in mining novel interesting patterns that better express the characteristics of the target data streams. Therefore, defining novel interesting patterns and developing mining methods that find them is a valuable research topic in the field of mining data streams, and such methods can be effectively used to analyze recent data streams. This paper proposes a gap-based weighting approach for sequential patterns and a mining method for weighted sequential patterns over sequence data streams based on this weighting approach. A gap-based weight of a sequential pattern can be computed from the gaps between the data elements of the pattern without any predefined weight information. That is, the gaps between data elements in each sequential pattern, as well as their generation orders, are used to obtain the weight of the pattern, which helps to find more interesting and useful sequential patterns. Since most computer application fields now generate data in the form of data streams rather than finite data sets, the proposed method focuses mainly on sequence data streams.
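The abstract does not give the exact weighting function, so the following Python sketch only illustrates the general idea: an occurrence of a pattern whose elements appear close together receives a weight near 1, while widely separated occurrences are down-weighted. The gap_based_weight helper and its 1/(1+gap) decay rule are illustrative assumptions, not the formula defined in the paper.

    # Hypothetical sketch of a gap-based weight for one occurrence of a
    # sequential pattern; the decay rule below is an assumption.
    def gap_based_weight(positions):
        """positions: indices (e.g., time stamps) at which the pattern's
        elements occur in a sequence, in generation order."""
        if len(positions) < 2:
            return 1.0  # a single element has no gap; assume full weight
        gaps = [b - a for a, b in zip(positions, positions[1:])]
        # Smaller gaps give a weight close to 1; larger gaps push it toward 0.
        return sum(1.0 / (1.0 + g) for g in gaps) / len(gaps)

    # Example: pattern <a, b, c> observed at positions 3, 4, and 9 of a sequence.
    print(gap_based_weight([3, 4, 9]))  # tighter occurrences score higher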

Permanent Preservation and Use of Historical Archives : Preservation Issues and Digitization of Historical Collections (역사기록물(Archives)의 항구적인 보존화 이용 : 보존전략과 디지털정보화)

  • Lee, Sang-min
    • The Korean Journal of Archival Studies / no.1 / pp.23-76 / 2000
  • In this paper, I examined what has been researched and determined about preservation strategy and the selection of preservation media in the western archival community. Archivists have primarily been concerned with the 'preservation' and 'use' of archival materials worth preserving permanently. In the new information era, the preservation and use of archival materials face new challenges. The life expectancy of paper records has been shortened by the acidification and brittleness of modern papers, and the emergence of information technology affects the traditional ways of preserving and using archival materials. User expectations are becoming so technology-oriented and so complicated that archivists must act like information managers using computer technology rather than practitioners of traditional archival handicraft. Preservation strategy plays an important role in archival management as well as in information management; for cost-effective management of archives and archival institutions, a preservation strategy is a must. The preservation strategy encompasses all aspects of the archival preservation process and practice, from selection of archives, appraisal, inventorying, arrangement, description, conservation, microfilming or digitization, and archival buildings to access services. These archival functions should be considered in relation to each other to ensure the proper preservation of archival materials. In an integrated preservation strategy, 'preservation' and 'use' should be combined and fulfilled without sacrificing either one. Preservation strategy planning is essential to determine the policies by which archives keep their holdings safe and provide people with maximum access in the most effective ways. Preservation microfilming ensures the permanent preservation of the information held in important archival materials. To this end, detailed standards have been developed to guarantee the permanence of microfilm as well as its product quality. Silver gelatin film can last up to 500 years in an optimum storage environment and is the most viable option for a permanent preservation medium. ISO and ANSI developed such standards for the quality of microfilms and microfilming technology, and preservation microfilming guidelines were also developed to ensure effective archival management and the picture quality of microfilms. It is essential to assess the need for preservation microfilming: limited resources always put a restraint on preservation management, and appraisal (and selection) of what is to be preserved is the most important part of preservation microfilming. In addition, microfilms of standard quality can be scanned to produce quality digital images for instant use through the internet. As information technology develops, archivists have begun to utilize it to make preservation easier and more economical and to promote the use of archival materials through computer communication networks. Digitization was introduced to provide easy and universal access to unique archives, and its large capacity for preserving archival data seems very promising. However, digitization, i.e., transferring images of records to electronic codes, still needs to be standardized. Digitized data are electronic records, and at present electronic records are very unstable and cannot be preserved permanently. Digital media, including optical disks, have not been proven to be reliable media for permanent preservation. Due to their chemical coating and their physical nature of using light, they are not stable and can be preserved for at best 100 years in an optimum storage environment; most CD-Rs can last only 20 years. Furthermore, the obsolescence of hardware and software makes it hard to reproduce digital images made with earlier versions. Even when reformatting is possible, the cost of refreshing or upgrading digital images is very high, and the process has to be repeated at least every five to ten years. No standard for dealing with this obsolescence of hardware and software exists yet. In short, digital permanence is not a fact but remains an uncertain possibility. Archivists must consider in their preservation planning both the risk of introducing new technology and its promising possibilities at the same time. In planning the digitization of historical materials, archivists should also plan for maintaining the digitized images and reformatting them for coming generations of new applications. Without such comprehensive planning, future use of the expensive digital images will become unavailable, which means a loss of information and a final failure of both the 'preservation' and 'use' of archival materials. As Peter Adelstein said, it is wise to be conservative when considerations of conservation are involved.

Implementing RPA for Digital to Intelligent (D2I) (디지털에서 인텔리전트(D2I)달성을 위한 RPA의 구현)

  • Dong-Jin Choi
    • Information Systems Review / v.21 no.4 / pp.143-156 / 2019
  • Types of innovation can be categorized into simplification, informatization, automation, and intelligence. Intelligence is the highest level of innovation, and RPA can be seen as one form of it. Robotic Process Automation (RPA), a software robot with artificial intelligence, is an example of intelligence suited to simple, repetitive, large-scale transaction processing tasks. RPA, which is already in operation in many companies in Korea, was introduced naturally out of the need to let people focus on core tasks in a situation where the need for a strong organizational culture is increasing and the emphasis is on voluntary leadership, strong teamwork and execution, and a professional working culture. Robotic Process Automation, or RPA, is a technology that replaces human tasks with the goal of handling structured tasks quickly and efficiently. RPA is implemented through software robots that mimic humans using software such as ERP systems or productivity tools. RPA robots are software installed on a computer and are called robots because of the way they operate. Unlike traditional software, which communicates with other IT systems through the back end, RPA is integrated throughout the IT system through the front end. In practice, this means that software robots use IT systems in the same way humans do: they repeat the correct steps and respond to events on the computer screen instead of communicating with the system's application programming interface (API). Designing software that mimics humans in order to communicate with other software can seem less intuitive, but this approach has many advantages. First, RPA can be integrated with virtually any software in use, regardless of its openness to third-party applications. Many enterprise IT systems are proprietary, offer few common APIs, and are severely limited in their ability to communicate with other systems; RPA solves this problem. Second, RPA can be implemented in a very short time. Traditional software development methods, such as enterprise software integration, are relatively time consuming, but RPA can be implemented in a relatively short period of two to four weeks. Third, automated processes built with software robots can easily be modified by system users. While traditional approaches require advanced coding techniques to drastically change how they work, RPA can be instructed by modifying relatively simple logical statements, or by modifying screen captures or graphical process charts of human-run processes. This makes RPA very versatile and flexible. RPA is thus a good example of the move from digital to intelligent (D2I).
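To make the front-end integration described above concrete, here is a minimal Python sketch of the RPA pattern: driving an application through its screen rather than its API. It uses the pyautogui library; the screenshot file name, the invoice number, and the enter_invoice helper are hypothetical examples, not part of any tool discussed in the paper.

    # Minimal RPA-style front-end automation sketch: the "robot" locates UI
    # elements on screen and drives them as a human operator would, instead of
    # calling a back-end API.  File names and values below are hypothetical.
    import pyautogui

    def enter_invoice(invoice_no):
        # Find the "New Invoice" button by its reference screenshot and click it
        # (behavior when the image is missing varies by pyautogui version).
        button = pyautogui.locateCenterOnScreen("new_invoice_button.png")
        pyautogui.click(button)
        # Type the invoice number into the focused field and confirm.
        pyautogui.write(invoice_no, interval=0.05)
        pyautogui.press("enter")

    enter_invoice("INV-2019-0001")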

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are nowadays widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, which is one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task to take on because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with pattern sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To promote discriminative power on the complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of the trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly among performers. To tackle this problem, online incremental learning is applied in our system to make it adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that, as the number of reference patterns grows, some reference patterns contribute more to false positive classification. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; the algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution. Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30 to 50. Each alphabet letter was performed 5 times per participant using a Nintendo Wii remote, and the acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and exhibited very high pairwise confusion rates; the major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. Comparing with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), we find the performance of our system superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the iPhone. The participating children showed improved concentration and reacted actively to the services with our gesture interface. To test the effectiveness of the gesture interface, the children took a test after experiencing an English teaching service; those who played with the gesture-interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screen, vision, and voice.
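As a concrete illustration of the periodic reference-set optimization described above, here is a hedged Python sketch in which each stored reference pattern carries counters for how often it supported a correct classification and how often it caused a false positive; the counter names and thresholds (min_positive, max_negative) are assumptions, not the paper's exact algorithm.

    # Hypothetical sketch: prune an instance-based learner's reference set by
    # each pattern's positive/negative contribution, measured over a period.
    def prune_reference_set(references, min_positive=1, max_negative=5):
        """references: list of dicts with keys 'pattern', 'pos', 'neg', where
        'pos' counts correct classifications the reference supported and
        'neg' counts false positives it caused since the last pruning."""
        kept = []
        for ref in references:
            too_harmful = ref["neg"] > max_negative   # drives false positives
            too_useless = ref["pos"] < min_positive   # rarely helps recall
            if not (too_harmful or too_useless):
                kept.append(ref)
            ref["pos"] = ref["neg"] = 0               # reset for the next period
        return kept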

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services / v.14 no.6 / pp.1-8 / 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining, a method of discovering useful patterns from huge databases, is one of the significant data mining techniques. Frequent pattern mining, one branch of pattern mining, extracts patterns whose frequencies exceed a minimum support threshold, and such patterns are called frequent patterns. Traditional frequent pattern mining uses a single minimum support threshold for the whole database. This single-support model implicitly supposes that all of the items in the database have the same nature. In real-world applications, however, each item in a database can have its own characteristics, and thus an appropriate pattern mining technique that reflects these characteristics is required. In the single-support framework, where the natures of items are not considered, the minimum support threshold must be set very low in order to mine patterns containing rare items, which, however, yields too many patterns including meaningless items; conversely, if too high a threshold is used, such patterns cannot be mined at all. This dilemma is called the rare item problem. To solve it, early studies proposed approximate approaches that split the data into several groups according to item frequencies or that group related rare items. However, these methods cannot find all of the frequent patterns, including the rare frequent patterns, because they are based on approximation. Hence, a pattern mining model with multiple minimum supports has been proposed to solve the rare item problem. In this model, each item has its own minimum support threshold, called the MIS (Minimum Item Support), which is calculated from the item's frequency in the database. By applying the MIS, the multiple minimum supports model finds all of the rare frequent patterns without generating meaningless patterns or losing significant ones. Meanwhile, candidate patterns are extracted during the mining process, and in the single minimum support model only the single threshold is compared with the frequencies of the candidate patterns; therefore, the characteristics of the items constituting a candidate pattern are not reflected, and the rare item problem occurs. To address this issue, the multiple minimum supports model uses the minimum MIS value among the items of a candidate pattern as the support threshold for that pattern, thereby considering its characteristics. To efficiently mine frequent patterns, including rare frequent patterns, with this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to those of the single minimum support model, where items are ordered by descending frequency. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one but demands more memory for the MIS information; moreover, both compared algorithms show good scalability.
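The following Python sketch illustrates the core check described above: each item gets its own MIS derived from its frequency, and a candidate pattern is judged against the smallest MIS among its items. The MIS formula (a fraction beta of the item's support with a floor ls) is a common formulation used here as an assumption; the paper's exact derivation of MIS values may differ.

    # Sketch of the multiple-minimum-supports check; the MIS formula is assumed.
    def compute_mis(item_support, beta=0.5, ls=0.01):
        """Map each item's support (fraction of transactions) to its MIS."""
        return {item: max(beta * sup, ls) for item, sup in item_support.items()}

    def is_frequent(pattern, pattern_support, mis):
        """A pattern is frequent if its support reaches the minimum MIS
        among its constituent items."""
        return pattern_support >= min(mis[item] for item in pattern)

    # Hypothetical supports: 'bread' is common, 'caviar' is rare.
    mis = compute_mis({"bread": 0.40, "milk": 0.35, "caviar": 0.02})
    print(is_frequent(("bread", "caviar"), 0.015, mis))  # judged by caviar's MIS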

Medical Information Dynamic Access System in Smart Mobile Environments (스마트 모바일 환경에서 의료정보 동적접근 시스템)

  • Jeong, Chang Won;Kim, Woo Hong;Yoon, Kwon Ha;Joo, Su Chong
    • Journal of Internet Computing and Services / v.16 no.1 / pp.47-55 / 2015
  • Recently, the environment of hospital information systems has tended to combine various smart technologies. Accordingly, various smart devices, such as smart phones and tablet PCs, are utilized in the medical information system, and these environments consist of various applications executing on heterogeneous sensors, devices, systems, and networks. In such hospital information system environments, applying a security service with a traditional access control method causes problems. Most existing security systems use an access control list structure: access is permitted only as defined by an access control matrix of, for example, client names and service object method names. The major problem with this static approach is that it cannot quickly adapt to changed situations. Hence, new security mechanisms are needed that are more flexible and can easily be adapted to various environments with very different security requirements. In addition, research is needed to address changes in the medical treatment given to a patient. In this paper, we suggest a dynamic approach for medical information systems in smart mobile environments. We focus on how to access medical information systems with dynamic access control methods based on the hospital's information system environments. The physical environment consists of mobile X-ray imaging devices, dedicated mobile and general smart devices, a PACS, an EMR server, and an authorization server. The software environment was developed on the .NET Framework for synchronization and monitoring services for mobile X-ray imaging equipment running the Windows 7 OS, and for the dedicated smart device application we implemented dynamic access services through JSP and the Java SDK on the Android OS. Medical information exchanged between the hospital's PACS and mobile X-ray imaging devices and the dedicated smart devices is based on the DICOM medical image standard, and the EMR information is based on HL7. In order to provide the dynamic access control service, we classify the context of the patients according to bio-information such as oxygen saturation, heart rate, blood pressure, and body temperature, and present event trace diagrams divided into two parts: a general situation and an emergency situation. We also designed dynamic access to the medical care information through an authentication method. The authentication information contains ID/PWD, roles, position and working hours, and emergency certification codes for emergency patients. In a general situation, the dynamic access control method grants access to medical information according to the values of the authentication information; in an emergency, access to medical information is granted with an emergency code, without the authentication information. We also constructed a medical information integration database schema consisting of medical information, patient, medical staff, and medical image information according to medical information standards. Finally, we show the usefulness of the dynamic access application service based on smart devices through execution results of the proposed system for patient contexts such as general and emergency situations. In particular, the proposed system provides effective medical information services with smart devices in emergency situations through dynamic access control methods. As a result, we expect the proposed system to be useful for u-hospital information systems and services.
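To make the dynamic access decision concrete, here is a hedged Python sketch: the patient context is classified from bio-information, and an emergency certification code can bypass the normal authentication-based check. The thresholds, field names, role list, and example emergency code are illustrative assumptions, not the system's actual rules.

    # Hypothetical sketch of the dynamic access decision for medical records.
    EMERGENCY_CODES = {"EMG-7731"}  # example code issued for an emergency patient

    def classify_context(vitals):
        """Label the patient context from bio-information (assumed thresholds)."""
        if (vitals["spo2"] < 90 or vitals["heart_rate"] > 130
                or vitals["body_temp"] > 39.0):
            return "emergency"
        return "general"

    def may_access(patient_id, requester, vitals, emergency_code=None):
        # Emergency path: a valid emergency code grants access without ID/PWD checks.
        if classify_context(vitals) == "emergency" and emergency_code in EMERGENCY_CODES:
            return True
        # General path: authenticated staff with a treating role, on duty,
        # and assigned to this patient.
        return (requester["authenticated"]
                and requester["role"] in {"physician", "nurse"}
                and requester["on_duty"]
                and patient_id in requester["patients"])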

Energy-Efficient Division Protocol for Mobile Sink Groups in Wireless Sensor Network (무선 센서 네트워크에서 이동 싱크 그룹의 분리를 지원하기 위한 라우팅 프로토콜)

  • Jang, Jaeyoung;Lee, Euisin
    • KIPS Transactions on Computer and Communication Systems / v.6 no.1 / pp.1-8 / 2017
  • Communication with mobile sink groups such as rescue teams or platoons raises a new and challenging issue for handling mobility in wireless sensor networks, and many studies have been proposed to support mobile sink groups. Looking closely at mobile sink groups, they can be divided into (multiple) small groups according to the properties of the application; for example, a platoon can be divided into multiple squads to carry out its mission on the battlefield. However, the previous studies cannot efficiently support the division of mobile sink groups because they do not address three challenging issues raised by such division. The first issue is selecting a leader sink for a new small mobile sink group. The second is efficient data delivery from a source to the small mobile sink groups. The third is sharing data between the leader sinks of the small mobile sink groups. This paper therefore proposes a routing protocol that efficiently supports the division of mobile sink groups by solving these three issues. For the first issue, the proposed protocol selects as the leader sink of a new small mobile sink group the sink that minimizes the sum of the distance between the new leader sink and the previous leader sink and the distances from the new leader sink to all of its member sinks. For efficient data delivery from a source to the small mobile sink groups (the second issue), the proposed protocol determines the path that minimizes the data dissemination distance from the source to each small mobile sink group, using the location information of both the source and the leader sinks. With regard to the third issue, the proposed protocol exploits member sinks located between leader sinks, based on their location information, to provide efficient data sharing among the leader sinks. Simulation results verified that the proposed protocol is superior to the previous protocol in terms of energy consumption.
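The leader selection criterion for a newly split sub-group can be sketched directly from the description above: pick the member whose distance to the previous leader plus the sum of its distances to all members of the new sub-group is smallest. The following Python sketch assumes 2-D coordinates and a Euclidean metric purely for illustration.

    # Sketch of leader sink selection for a newly divided small sink group.
    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def select_leader(new_group, prev_leader):
        """new_group: list of (x, y) member sink positions; prev_leader: (x, y)."""
        def cost(candidate):
            to_prev = dist(candidate, prev_leader)                    # to old leader
            to_members = sum(dist(candidate, m) for m in new_group)   # to members
            return to_prev + to_members
        return min(new_group, key=cost)

    # Example: a squad splitting off from a group whose leader sits at (0, 0).
    print(select_leader([(10, 2), (12, 3), (11, 5)], prev_leader=(0, 0)))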

Analysis of Internet Biology Study Sites and Guidelines for Constructing Educational Homepages (인터넷상의 고등학교 생물 학습사이트 비교분석 및 웹사이트 구축방안에 관한 연구)

  • Kim, Joo-Hyun;Sung, Jung-Hee
    • Journal of The Korean Association For Science Education / v.22 no.4 / pp.779-795 / 2002
  • The Internet, a worldwide network of computers, is considered a sea of information because it allows people to share information beyond the barriers of time and space. However, in spite of the immeasurable potential applications of the Internet, its use in the field of biology education has been extremely limited, mainly due to the scarcity of good biology-related sites. In order to provide useful guidelines for constructing user-friendly study sites that can help high school students of different intellectual levels to study biology, comparative studies were performed on selected educational sites. Initially, hundreds of related sites were examined; subsequently, four distinct sites were selected not only because they are well organized but also because each is unique in its content. A survey was also carried out among the users of each site. The survey results indicated that high school students regard web-based biology study tools as effective teaching methods, although there might be some bias in the criteria for selecting the target sites. In addition to detailed biology topics and related biological information, multimedia data including pictures, animations, and movies were found to be one of the important ingredients of a desirable biology study site; thus, the inclusion of multimedia components should also be considered when developing a systematic biology study site. Overall, the role of cyberspace is expected to become more and more important. Because the development of user-satisfying, self-guided sites requires interdisciplinary collaboration, efforts should be made to promote extensive communication among teachers, education professionals, and computer engineers. Furthermore, the introduction of good biology study sites to students by their teachers is also an important factor in successful web-based education.

Usefulness of Data Mining in Criminal Investigation (데이터 마이닝의 범죄수사 적용 가능성)

  • Kim, Joon-Woo;Sohn, Joong-Kweon;Lee, Sang-Han
    • Journal of forensic and investigative science / v.1 no.2 / pp.5-19 / 2006
  • Data mining is an information extraction activity that discovers hidden facts contained in databases. Using a combination of machine learning, statistical analysis, modeling techniques, and database technology, data mining finds patterns and subtle relationships in data and infers rules that allow the prediction of future results. Typical applications include market segmentation, customer profiling, fraud detection, evaluation of retail promotions, and credit risk analysis. Law enforcement agencies deal with massive amounts of data when investigating crime, and the amount is increasing with the development of computerized data processing, so we are now confronted with the new challenge of discovering knowledge in these data. Data mining can be applied in criminal investigation to find offenders by analyzing complex and relational data structures and free texts such as criminal records or statements. This study aimed to evaluate the possible applications of data mining, and its limitations, in practical criminal investigation. Clustering of criminal cases is feasible for habitual crimes such as fraud and burglary when data mining is used to identify crime patterns. Neural network modelling, one of the tools of data mining, can be applied to differentiating a suspect's photograph or handwriting from that of a convict, or to criminal profiling. A case study of practical insurance fraud showed that data mining is also useful for organized crime such as gang crime, terrorism, and money laundering. However, the products of data mining in criminal investigation should be evaluated with caution, because data mining offers only a clue rather than a conclusion. Legal regulation is needed to prevent abuse by law enforcement agencies and to protect personal privacy and human rights.
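As a toy illustration of the case clustering mentioned above, the short Python sketch below groups burglary cases by a few numeric features using k-means; the feature encoding, the data values, and the choice of two clusters are invented for illustration only (in practice the features would also be scaled before clustering).

    # Hypothetical example: cluster burglary cases to surface a repeating pattern.
    import numpy as np
    from sklearn.cluster import KMeans

    # Columns: hour of day, entry-method code, stolen property value (invented data).
    cases = np.array([
        [2, 1, 500],
        [3, 1, 450],
        [14, 2, 9000],
        [15, 2, 8700],
    ])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cases)
    print(labels)  # cases sharing a label exhibit a similar modus operandi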


Comparison between the Calculated and Measured Doses in the Rectum during High Dose Rate Brachytherapy for Uterine Cervical Carcinomas (자궁암의 고선량율 근접 방사선치료시 전산화 치료계획 시스템과 in vivo dosimetry system 을 이용하여 측정한 직장 선량 비교)

  • Chung, Eun-Ji;Lee, Sang-Hoon
    • Radiation Oncology Journal / v.20 no.4 / pp.396-404 / 2002
  • Purpose: Many papers support a correlation between rectal complications and rectal doses in uterine cervical cancer patients treated with radical radiotherapy. In vivo dosimetry in the rectum, following ICRU Report 38, contributes to quality assurance in HDR brachytherapy, especially in minimizing side effects. This study compares the rectal doses calculated in the radiation treatment planning system with those measured with a silicon diode in vivo dosimetry system. Methods: Nine patients with uterine cervical carcinoma, treated with Iridium-192 high dose rate brachytherapy between June 2001 and February 2002, were retrospectively analysed. Six to eight fractions of high dose rate (HDR) intracavitary radiotherapy (ICR) were delivered twice per week, with a total dose of 28~32 Gy to point A. In 44 applications to the 9 patients, the measured rectal doses were analyzed and compared with the rectal doses calculated using the radiation treatment planning system. Using graphic approximation methods in conjunction with localization radiographs, the expected dose values at the detector points of an intrarectal semiconductor dosimeter were calculated. Results: There were significant differences between the rectal doses calculated from the simulation radiographs and those calculated from the radiographs taken at each fraction of the HDR ICR. There were also significant differences between the calculated rectal doses and those measured with the in vivo diode dosimetry system. The rectal reference point on the anteroposterior line drawn through the lower end of the uterine sources, according to ICRU Report 38, received the maximum rectal dose in only 2 of the nine patients (22.2%). Conclusion: In HDR ICR planning for cervical cancer, optimizing the dose to the rectum with the computer-assisted planning system using simulation radiographs alone is improper. This study showed that in vivo rectal dosimetry with a diode detector during HDR ICR can play a useful role in quality control of HDR brachytherapy for cervical carcinomas, and the importance of individual dosimetry for each HDR ICR is clear. In departments that do not have an in vivo dosimetry system, the radiation oncologist has to locate the rectal marker from lateral fluoroscopic findings before each fractionated HDR brachytherapy, which is a necessary and important step of HDR brachytherapy for cervical cancer.