• Title/Summary/Keyword: Time graph


A Semantic Classification Model for e-Catalogs (전자 카탈로그를 위한 의미적 분류 모형)

  • Kim Dongkyu;Lee Sang-goo;Chun Jonghoon;Choi Dong-Hoon
    • Journal of KIISE:Databases
    • /
    • v.33 no.1
    • /
    • pp.102-116
    • /
    • 2006
  • Electronic catalogs (or e-catalogs) hold information about the goods and services offered or requested by the participants and consequently form the basis of an e-commerce transaction. Catalog management is complicated by a number of factors, and product classification is at the core of these issues. The classification hierarchy is used for spend analysis, customs regulation, and product identification. Classification is the foundation on which product databases are designed and plays a central role in almost all aspects of the management and use of product information. However, product classification has received little formal treatment in terms of its underlying model, operations, and semantics. We believe that the lack of a logical model for classification introduces a number of problems, not only for the classification itself but also for the product database in general. A classification scheme needs to accommodate diverse user views to support efficient and convenient use of product information. It needs to change and evolve frequently without breaking consistency when new products are introduced, existing products are discontinued, or classes are reorganized or specialized. It also needs to be merged and mapped with other classification schemes without information loss when B2B transactions occur. To satisfy these requirements, a classification scheme should be dynamic enough to absorb such changes within acceptable time and cost. The classification schemes in wide use today, such as UNSPSC and eClass, however, have many limitations with respect to these dynamic requirements. In this paper, we examine what it means to classify products and present how best to represent classification schemes so as to capture the semantics behind the classifications and facilitate mappings between them. Product information carries a great deal of semantics, such as class attributes like material, time, and place, as well as integrity constraints. We analyze the dynamic features of product databases and the limitations of existing code-based classification schemes, and we describe a semantic classification model that satisfies the requirements for the dynamic features of product databases. It provides a means to explicitly and formally express richer semantics for product classes and organizes class relationships into a graph. We believe the model proposed in this paper satisfies the requirements and challenges that have been raised by previous works.
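As a rough, hypothetical illustration of the kind of graph-based class model the abstract argues for (not the authors' actual formalism), the sketch below represents product classes as nodes carrying explicit attributes and typed relationships between classes as labeled edges, so that class semantics survive hierarchy reorganization. The class names, attributes, and the "is-a" relation used here are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ProductClass:
    """A node in the classification graph, carrying explicit class attributes."""
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. material, place of use

@dataclass
class ClassificationGraph:
    """Classes as nodes, typed relationships (is-a, part-of, ...) as labeled edges."""
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)         # (child, relation, parent)

    def add_class(self, cls: ProductClass) -> None:
        self.nodes[cls.name] = cls

    def relate(self, child: str, relation: str, parent: str) -> None:
        self.edges.append((child, relation, parent))

    def inherited_attributes(self, name: str) -> dict:
        """Collect attributes along is-a edges, so a class keeps its semantics
        even when the hierarchy is reorganized."""
        attrs = dict(self.nodes[name].attributes)
        for child, rel, parent in self.edges:
            if child == name and rel == "is-a":
                attrs = {**self.inherited_attributes(parent), **attrs}
        return attrs

# Hypothetical catalog fragment
g = ClassificationGraph()
g.add_class(ProductClass("Furniture", {"material": "unspecified"}))
g.add_class(ProductClass("Office Chair", {"material": "steel/fabric", "place": "office"}))
g.add_class(ProductClass("Ergonomic Chair", {"feature": "lumbar support"}))
g.relate("Office Chair", "is-a", "Furniture")
g.relate("Ergonomic Chair", "is-a", "Office Chair")
print(g.inherited_attributes("Ergonomic Chair"))
```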

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands, I studied the following topics, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to derive a unit hydrograph, but here I explain how to derive one from the actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph at two-hour intervals, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, with an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gauge stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gauge record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff. I also tried to keep the difference between the calculated and the measured discharge to less than 10%. The discharge period of a unit hydrograph depends on the length of the catchment area. 2. Determination of sluice dimension. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structure damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can then be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. Mean tide is adequate for determining the sluice dimension, because spring tide is the worst case and neap tide the best case for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase in velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. Critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner: using an average tide with negligible daily variation, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m3/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h=\frac{V^2}{2g}$ and must be equal to the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 times the difference between the lower water level and the crest of the dam, we speak of a "free weir." The flow over the weir is then dependent upon the higher water level and not on the difference between high and low water levels. When the weir is "submerged," that is, when the higher water level is less than 2/3 times the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, due to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec. As the maximum velocities are higher than this limit, we must use other construction methods to close the gap. This can be done with dump-cars from each side or by using a cable way.
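The two core calculations above lend themselves to a short numerical sketch. The Python fragment below, with made-up numbers rather than the Naju or Mokpo data, derives a two-hour unit hydrograph by subtracting base flow and dividing each ordinate by the effective rainfall, and checks a closing-gap velocity both from current/area and from the head formula $h=\frac{V^2}{2g}$ (i.e., $V=\sqrt{2gh}$), mirroring the iterative consistency check described for the closure computation.

```python
import math

# --- 1. Two-hour unit hydrograph (illustrative ordinates, not the Naju data) ---
total_flow = [50, 180, 420, 610, 480, 300, 170, 90, 55]   # m3/s, every 2 hours
base_flow = 50.0                                           # m3/s, assumed constant
effective_rainfall_cm = 9.4 * 0.1 * 12                     # 9.4 mm/h for 12 h, in cm

direct_runoff = [q - base_flow for q in total_flow]
# Ordinates of the unit hydrograph: direct runoff per unit (1 cm) of effective rain
unit_hydrograph = [q / effective_rainfall_cm for q in direct_runoff]

# --- 2. Velocity in the closing gap from the head difference, h = V^2 / (2g) ---
def gap_velocity(head_m: float, g: float = 9.81) -> float:
    """Velocity implied by a head difference between outer (tidal) and inner level."""
    return math.sqrt(2.0 * g * head_m)

# Consistency check used in the closure computation: the velocity from the head
# must match the velocity from current / cross-sectional area; otherwise the
# inner water level is re-estimated and the procedure repeated.
current_m3s = 1800.0          # assumed current through the gap
gap_area_m2 = 600.0           # assumed remaining cross-sectional area
v_from_current = current_m3s / gap_area_m2
v_from_head = gap_velocity(head_m=0.46)
print(f"unit hydrograph peak: {max(unit_hydrograph):.1f} m3/s per cm of rain")
print(f"v from current: {v_from_current:.2f} m/s, v from head: {v_from_head:.2f} m/s")
```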


Clustering Method based on Genre Interest for Cold-Start Problem in Movie Recommendation (영화 추천 시스템의 초기 사용자 문제를 위한 장르 선호 기반의 클러스터링 기법)

  • You, Tithrottanak;Rosli, Ahmad Nurzid;Ha, Inay;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.57-77
    • /
    • 2013
  • Social media has become one of the most popular media in web and mobile applications. In 2011, social networks and blogs were still the top destination of online users, according to a study from the Nielsen Company. In their studies, nearly 4 in 5 active users visit social networks and blogs. Social network and blog sites rule Americans' Internet time, accounting for 23 percent of time spent online. Facebook is the main social network on which U.S. Internet users spend more time than on other social network services such as Yahoo, Google, AOL Media Network, Twitter, LinkedIn, and so on. In a recent trend, most companies promote their products on Facebook by creating a "Facebook Page" for a specific product. The "Like" option allows users to subscribe to a page and receive the updates they are interested in. Film makers around the world also market and promote their films by exploiting the advantages of the "Facebook Page". In addition, a great number of streaming service providers allow users to subscribe to their services to watch and enjoy movies and TV programs; users can instantly watch movies and TV programs over the Internet on PCs, Macs, and TVs. Netflix alone, the world's leading subscription service, has more than 30 million streaming members in the United States, Latin America, the United Kingdom, and the Nordics. As a matter of fact, millions of movies and TV programs of different genres are offered to subscribers. Consequently, users need to spend a lot of time finding the right movies related to their genres of interest. In recent years, many researchers have proposed methods to improve the prediction of ratings or preferences so as to recommend the most relevant items, such as books, music, or movies, to a target user or a group of users who share an interest in particular items. One of the most popular methods for building a recommendation system is traditional Collaborative Filtering (CF). The method computes the similarity between the target user and other users, who are then clustered by shared interest according to the items they have rated; items from the same group of users are then predicted and recommended to that group. There are many kinds of items that could be recommended to users, such as books, music, movies, news, and videos; in this paper, however, we focus only on movies. In addition, CF faces several challenges. The first is the "sparsity problem", which occurs when user preference information is not sufficient; recommendation accuracy is lower than for neighbors with a large number of ratings. The second is the "cold-start problem", which occurs whenever new users or items are added to the system, each with no rating or only a few ratings. For instance, no personalized predictions can be made for a new user without any ratings on record. In this research we propose a clustering method based on users' genre interest extracted from a social network service (SNS) together with users' movie rating information to solve the cold-start problem. Our proposed method clusters the target user together with other users by combining genre interest and rating information. A huge amount of interesting and useful user information is available from the Facebook Graph; we extract information from the "Facebook Pages" that users have "Liked". Moreover, we use the Internet Movie Database (IMDb) as the main dataset. IMDb is an online database that contains a large amount of information related to movies, TV programs, and actors. This dataset is used not only to provide movie information in our Movie Rating System, but also as a resource for the movie genre information extracted from the "Facebook Pages". First, the user must log in to the Movie Rating System with their Facebook account; at the same time, our system collects the user's genre interest from the "Facebook Pages". We conducted a number of experiments to see how our method performs and compared it against other methods. First, we compared our proposed method in the case of normal recommendation to see how our system improves the recommendation result. Then we evaluated the method in the cold-start case. Our experiments show that our method outperforms the other methods; in both cases, the proposed method produces better results.
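A minimal sketch of the general idea, under assumptions not taken from the paper (the weighting scheme, the use of k-means, and all data below are illustrative): genre-interest vectors extracted from "Liked" pages are combined with rating vectors, users are clustered on the combined features, and a cold-start user with no ratings is still placed in a cluster through genre interest, from which recommendations are drawn.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: rows are users, columns are genres / movies.
# genre_interest[u, g] = 1 if the user "Liked" a Facebook Page of that genre.
genre_interest = np.array([
    [1, 0, 1, 0],    # user 0: action, sci-fi
    [1, 0, 1, 0],    # user 1: similar tastes, but a new (cold-start) user
    [0, 1, 0, 1],    # user 2: romance, drama
    [0, 1, 0, 1],    # user 3
])
ratings = np.array([
    [5, 0, 4, 0, 0],   # 0 = unrated
    [0, 0, 0, 0, 0],   # cold-start user: no ratings yet
    [0, 4, 0, 5, 3],
    [1, 5, 0, 4, 0],
])

# Combine both signals into one feature vector per user; the weight is an assumption.
alpha = 0.7
features = np.hstack([alpha * genre_interest,
                      (1 - alpha) * (ratings / 5.0)])

# Cluster users; the cold-start user lands in a cluster via genre interest alone.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Recommend to the cold-start user the items rated highest inside their cluster.
target = 1
peers = [u for u in range(len(labels)) if labels[u] == labels[target] and u != target]
cluster_mean = ratings[peers].mean(axis=0)
recommended = np.argsort(cluster_mean)[::-1]
print("cluster labels:", labels, "recommended item order for user 1:", recommended)
```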

Study on the Chemical Management - 2. Comparison of Classification and Health Index of Chemicals Regulated by the Ministry of Environment and the Ministry of the Employment and Labor (화학물질 관리 연구-2. 환경부와 고용노동부의 관리 화학물질의 구분, 노출기준 및 독성 지표 등의 특성 비교)

  • Kim, Sunju;Yoon, Chungsik;Ham, Seunghon;Park, Jihoon;Kim, Songha;Kim, Yuna;Lee, Jieun;Lee, Sangah;Park, Donguk;Lee, Kwonseob;Ha, Kwonchul
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.25 no.1
    • /
    • pp.58-71
    • /
    • 2015
  • Objectives: The aims of this study were to investigate the classification systems for chemical substances in the Occupational Safety and Health Act (OSHA) and the Chemical Substances Control Act (CSCA) and to compare several health indices (i.e., Time Weighted Average (TWA), Lethal Dose ($LD_{50}$), and Lethal Concentration ($LC_{50}$)) of chemical substances by category in each law. Methods: The chemicals regulated by each law were classified by the specific categories provided in the respective law: seven categories for the OSHA (chemicals with OELs; chemicals prohibited from manufacturing, etc.; chemicals requiring approval; chemicals kept below permissible limits; chemicals requiring workplace monitoring; chemicals requiring special management; and chemicals requiring special health diagnosis) and five categories for the CSCA (poisonous substances, permitted substances, restricted substances, prohibited substances, and substances requiring preparation for accidents). Information on physicochemical properties and health indices, including CMR characteristics, $LD_{50}$, and $LC_{50}$, was collected from the homepages of the Korea Occupational Safety and Health Agency and the National Institute of Environmental Research, among others. Statistical analysis was conducted to compare TWA and each health index for each category. Results: The number of chemicals based on CAS numbers was different from the number of serial entries listed in each law because of repeat listings under different names (e.g., glycol monoethyl ether vs. 2-ethoxy ethanol) and the grouping of different chemicals under the same serial number (i.e., five different benzidine-related chemicals were categorized under one serial number (06-4-13) as prohibited substances under the CSCA). A total of 722 chemicals were listed under the OSHA and its sub-regulations and 995 under the CSCA and its sub-regulations. Among these, 36.8% of the OSHA chemicals and 26.7% of the CSCA chemicals were regulated by both laws. The correlation coefficients between TWA and $LC_{50}$ and between TWA and $LD_{50}$ were 0.641 and 0.506, respectively. The geometric mean (GM) values of TWA calculated for each category in both laws showed no tendency by category. The patterns of the cumulative graphs for TWA, $LD_{50}$, and $LC_{50}$ were similar for the chemicals regulated by the OSHA and the CSCA, but the median values were lower for the CSCA-regulated chemicals than for the OSHA-regulated chemicals. The GM of carcinogenic chemicals under the OSHA was significantly lower than that of non-CMR chemicals ($2.21mg/m^3$ vs. $5.69mg/m^3$, p=0.006), while there was no significant difference for CSCA chemicals ($0.85mg/m^3$ vs. $1.04mg/m^3$, p=0.448). $LC_{50}$ showed no significant difference between carcinogens, mutagens, reproductive toxic chemicals, and non-CMR chemicals under either law, while there was a difference in $LD_{50}$ between carcinogens and non-CMR chemicals under the CSCA. Conclusions: This study found no specific tendency or significant difference in health indices such as TWA, $LD_{50}$, and $LC_{50}$ across the subcategories of chemicals classified by the Ministry of Employment and Labor and the Ministry of Environment. Considering the background and purpose of each law, collaboration toward harmonized chemical categorization and regulation is necessary.
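For illustration only, the snippet below shows the kind of statistics reported above: correlating an exposure limit (TWA) with acute toxicity indices ($LC_{50}$, $LD_{50}$) and comparing geometric means of TWA between two chemical categories. The values, the log transformation, and the t-test on log values are assumptions for the sketch, not the study's data or necessarily its exact procedures.

```python
import numpy as np
from scipy import stats

# Hypothetical toxicity data (not the study's values): TWA in mg/m3,
# LC50 in mg/L, LD50 in mg/kg for a handful of chemicals.
twa  = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 50.0, 200.0])
lc50 = np.array([0.8, 2.5, 3.0, 12.0, 30.0, 90.0, 600.0])
ld50 = np.array([30, 80, 50, 300, 500, 2000, 5000], dtype=float)

# Correlation between exposure limit and acute toxicity indices;
# log-transforming is an assumption (the indices span orders of magnitude).
r_lc, p_lc = stats.pearsonr(np.log10(twa), np.log10(lc50))
r_ld, p_ld = stats.pearsonr(np.log10(twa), np.log10(ld50))

# Geometric-mean comparison of TWA between two hypothetical categories,
# e.g. carcinogens vs. non-CMR chemicals, via a t-test on log values.
twa_carcinogens = np.array([0.5, 1.0, 2.0])
twa_non_cmr     = np.array([5.0, 10.0, 50.0, 200.0])
gm_c, gm_n = stats.gmean(twa_carcinogens), stats.gmean(twa_non_cmr)
t, p = stats.ttest_ind(np.log10(twa_carcinogens), np.log10(twa_non_cmr))
print(f"r(TWA, LC50)={r_lc:.2f}, r(TWA, LD50)={r_ld:.2f}")
print(f"GM carcinogens={gm_c:.2f} mg/m3, GM non-CMR={gm_n:.2f} mg/m3, p={p:.3f}")
```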

The Application of Operations Research to Librarianship : Some Research Directions (운영연구(OR)의 도서관응용 -그 몇가지 잠재적응용분야에 대하여-)

  • Choi Sung Jin
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.4
    • /
    • pp.43-71
    • /
    • 1975
  • Operations research has developed rapidly since its origins in World War II. Practitioners of O.R. have contributed to almost every aspect of government and business. More recently, a number of operations researchers have turned their attention to library and information systems, and the author believes that significant research has resulted. It is the purpose of this essay to introduce the library audience to some of these accomplishments, to present some of the author's hypotheses on the subject of library management to which he believes O.R. has great potential, and to suggest some future research directions. Some problem areas in librarianship where O.R. may play a part have been discussed and are summarized below. (1) Library location. It is usually necessary to strike a balance between accessibility and cost in location problems. Many mathematical methods are available for identifying the optimal locations once the balance between these two criteria has been decided. The major difficulties lie in relating cost to size and in taking future change into account when discriminating among possible solutions. (2) Planning new facilities. Standard approaches to using mathematical models for simple investment decisions are well established. If the problem is one of choosing the most economical way of achieving a certain objective, one may compare the alternatives by using one of the discounted cash flow techniques (a numerical sketch follows this entry). In other situations it may be necessary to use a cost-benefit approach. (3) Allocating library resources. In order to allocate resources to best advantage, the librarian needs to know how the effectiveness of the services he offers depends on the way he deploys his resources. The O.R. approach to the problem is to construct a model representing effectiveness as a mathematical function of the levels of different inputs (e.g., numbers of people in different jobs, acquisitions of different types, physical resources). (4) Long-term planning. Resource allocation problems are generally concerned with up to one and a half years ahead. The longer term certainly offers both greater freedom of action and greater uncertainty, and it is thus difficult to generalize about long-term planning problems. In other fields, however, O.R. has made a significant contribution to long-range planning, and it is likely to make one in librarianship as well. (5) Public relations. It is generally accepted that actual and potential users are too ignorant both of the range of library services provided and of how to make use of them. How should services be brought to the attention of potential users? The answer seems to lie in obtaining empirical evidence through controlled experiments in which a group of libraries participates. (6) Acquisition policy. In comparing alternative policies for the acquisition of materials, one needs to know, first, the implications of each policy for each service that depends on the stock, and second, the relative importance to be ascribed to each service for each class of user. By reducing the effort required for the first, formal models will allow the librarian to concentrate his attention upon the value judgements that will be necessary for the second. (7) Loan policy. The approach to choosing between loan policies is much the same as the previous approach. (8) Manpower planning. For large library systems one should consider constructing models which permit comparison of the skills necessary in the future with predictions of the skills that will be available, so as to allow informed decisions. (9) Management information systems for libraries. A great deal of data is available in libraries as a by-product of all recording activities. It is particularly tempting, when procedures are computerized, to make summary statistics available as a management information system. The value of information to particular decisions that may have to be taken in the future is best assessed in terms of a model of the relevant problem. (10) Management gaming. One of the most common uses of a management game is as a means of developing staff's ability to take decisions. The value of such exercises depends upon the validity of the computerized model. If the model were sufficiently simple to take the form of a mathematical equation, decision-makers would probably be able to learn adequately from a graph. More complex situations require simulation models. (11) Diagnostic tools. Libraries are sufficiently complex systems that it would be useful to have simple means of telling whether performance could be regarded as satisfactory and which, if it could not, would also provide pointers to what was wrong. (12) Data banks. It would appear to be worth considering establishing a bank for certain types of data. If certain items on questionnaires were to take a standard form, a greater pool of data would be available for various analyses. (13) Effectiveness measures. The meaning of a library performance measure is not readily interpreted. Each measure must itself be assessed in relation to the corresponding measures for earlier periods of time and to a standard measure, which may be a corresponding measure in another library, the 'norm', the 'best practice', or user expectations.
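As a small illustration of the discounted-cash-flow comparison mentioned under item (2), the sketch below compares two hypothetical ways of achieving the same objective by their net present cost; all figures and the discount rate are invented.

```python
# Minimal sketch of a discounted-cash-flow comparison between two hypothetical
# facility options with the same objective; costs are entered as negative flows.

def net_present_value(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) at the given rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Alternative A: large initial outlay, low running costs.
option_a = [-500_000] + [-20_000] * 10
# Alternative B: smaller outlay, higher running costs.
option_b = [-200_000] + [-60_000] * 10

rate = 0.08   # assumed discount rate
npv_a = net_present_value(option_a, rate)
npv_b = net_present_value(option_b, rate)
cheaper = "A" if npv_a > npv_b else "B"   # less negative = lower present cost
print(f"NPV A: {npv_a:,.0f}  NPV B: {npv_b:,.0f}  -> choose option {cheaper}")
```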


The Influence of Musical Activities on Social and Emotional Behavior of Infants (음악활동이 영아의 사회·정서적 행동에 미치는 영향 - 만 2세(25-36개월) 영아를 중심으로)

  • Nam, Ok Seon
    • Journal of Music and Human Behavior
    • /
    • v.4 no.2
    • /
    • pp.18-40
    • /
    • 2007
  • The purpose of this study was to verify the influence of musical activities on the social and emotional behavior of infants, by providing musical activities to infants cared for in nurseries and observing the interactions between peers, or between a therapist and an infant, that arose during them. The subjects were 24 two-year-old infants (25-36 months) at two nurseries located in the Bundang area; 13 of them were randomly assigned to the study group and 11 to the control group. A pretest and posttest on social and emotional behavior were performed, using the ITSEA developed by Briggs-Gowan and Carter (2001) and amended by Shin Ji Yeon (2004) as the evaluation tool. The infants' changes in interaction behavior during musical play were also analyzed quantitatively and qualitatively; for this analysis, the infant social play behavior examination tool developed by Holloway and Reichhart-Erickson (1988) was adopted. Each item of this tool was evaluated based on the time sampling method: each session lasted 15 minutes, and interactions were coded 60 times per session, once every 15 seconds. The analysis results were presented in tables and graphs, and the behavior changes were described qualitatively. When the average scores for positive and negative social and emotional behaviors were compared between the study group and the control group, this study showed that the positive behavior scores of the study group increased and the negative behavior scores decreased. Concentration and empathy among the positive behaviors increased significantly, while aggression, defiance, separation anxiety, and rejection of new things among the negative behaviors decreased significantly. The conclusions of this study are as follows. First, interactions with peers or a therapist based on music and musical experience strengthen positive social and emotional behavior and decrease negative behavior. Second, music influences an infant's negative behaviors more than positive behaviors, and has a particularly good effect on the sub-behaviors of negative behavior.


Comparison of Center Error of X-ray Field and Light Field Size of Diagnostic Digital X-ray Unit according to the Hospital Grade (병원 등급에 따른 X선조사야와 광조사야 간의 면적 및 중심점 오차 비교)

  • Lee, Won-Jeong;Song, Gyu-Ri;Shin, Hyun-yi
    • Journal of the Korean Society of Radiology
    • /
    • v.14 no.3
    • /
    • pp.245-252
    • /
    • 2020
  • The purpose of this study was to highlight the importance of quality control (QC) for reducing exposure and improving image quality by comparing the center-point (CP) error and the difference between X-ray field (XF) and light field (LF) size in diagnostic digital X-ray devices according to hospital grade. XF size, LF size, and CP were measured on 12 digital X-ray devices at 10 hospitals located in 00 metropolitan cities. A phantom was made from 0.8 mm wire laid out at different widths, attached to standardized graph paper on a transparent plastic plate, with a cross wire marked at the center of the phantom. After placing the phantom on the table of the digital X-ray device, images were obtained by exposing it vertically for each field size. All images were acquired under the same exposure conditions at a focus-to-detector distance of 100 cm. XF size, LF size, and CP error were measured using the picture archiving and communication system. Data were expressed as mean with standard error and analyzed using SPSS ver. 22.0. The difference between XF and LF size was smallest in clinics, followed by university hospitals, hospitals, and general hospitals. Compared with university hospitals, which had the smallest CP error, there was a statistically significant difference in CP error between university hospitals and clinics (p=0.024). Devices whose last QC was less than 36 months earlier had significantly smaller errors than those in the over-36-month group (0.26 vs. 0.88, p=0.036). The difference between XF and LF size was lowest in clinics, and CP error was lowest in university hospitals. Moreover, hospitals with a shorter period since their last QC had smaller CP errors, which means that the introduction of timely QC according to the QC items is essential.
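For illustration, the fragment below shows one plausible way to compute the two quantities compared in this study, the field-size difference and the center-point error, from measured field dimensions and centers; the coordinate values are hypothetical, not the study's measurements.

```python
import math

# Hypothetical measurements read off the graph-paper phantom image (cm):
# width, height, and center coordinates of the light field and the X-ray field.
light_field = {"width": 30.0, "height": 30.0, "center": (0.0, 0.0)}
xray_field  = {"width": 30.9, "height": 30.6, "center": (0.4, -0.3)}

# Difference in field area between the X-ray field and the light field.
area_diff = (xray_field["width"] * xray_field["height"]
             - light_field["width"] * light_field["height"])

# Center-point error as the Euclidean distance between the two field centers.
dx = xray_field["center"][0] - light_field["center"][0]
dy = xray_field["center"][1] - light_field["center"][1]
cp_error = math.hypot(dx, dy)

print(f"area difference: {area_diff:.1f} cm^2, center-point error: {cp_error:.2f} cm")
```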

The Influence of Iteration and Subset on True X Method in F-18-FPCIT Brain Imaging (F-18-FPCIP 뇌 영상에서 True-X 재구성 기법을 기반으로 했을 때의 Iteration과 Subset의 영향)

  • Choi, Jae-Min;Kim, Kyung-Sik;NamGung, Chang-Kyeong;Nam, Ki-Pyo;Im, Ki-Cheon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.122-126
    • /
    • 2010
  • Purpose: F-18-FPCIT, which shows strong affinity for the DAT located at the neural terminal site, offers diagnostic information about the DAT density in the region of the striatum, especially in Parkinson's disease. In this study, we altered the iteration and subset numbers and measured SUV${\pm}$SD and contrast from brain and phantom images reconstructed with specific iteration and subset settings, in order to suggest an appropriate range for the iteration and subset numbers. Materials and Methods: This study was performed with 10 normal volunteers without any history of Parkinson's disease or cerebral disease and a Flangeless Esser PET Phantom from Data Spectrum Corporation. $5.3{\pm}0.2$ mCi of F-18-FPCIT was injected into the normal group, and the PET phantom was assembled according to the ACR PET Phantom instructions; its actual ratio between hot spheres and background was 2.35 to 1. Brain and phantom images were acquired 3 hours after injection, for ten minutes each. A SIEMENS Biograph 40 TruePoint was used, and the True-X method was applied for image reconstruction. The iterations and subsets were set to 2 iterations/8 subsets, 3 iterations/16 subsets, 6 iterations/16 subsets, 8 iterations/16 subsets, and 8 iterations/21 subsets, respectively. To measure SUVs on the brain images, ROIs were drawn on the right putamen. The coefficient of variation (CV) was also calculated to indicate the uniformity at each iteration/subset combination. In the phantom study, we measured the ratio between hot spheres and background at each combination; ROIs of the same size were drawn at the same slice and location. Results: Mean SUVs were 10.60, 12.83, 13.87, 13.98, and 13.5 for each combination. The fluctuation between consecutive settings was 22.36%, 10.34%, 1.1%, and 4.8%, respectively; the fluctuation of the mean SUV was lowest between 6 iterations/16 subsets and 8 iterations/16 subsets. CV was 9.07%, 11.46%, 13.56%, 14.91%, and 19.47%, respectively, which means that as the iteration and subset numbers increase, the image uniformity gets worse. The fluctuation of CV between consecutive settings was 2.39, 2.1, 1.35, and 4.56, and the fluctuation of uniformity was lowest between 6 iterations/16 subsets and 8 iterations/16 subsets. In the contrast test, the measured ratios were 1.92:1, 2.12:1, 2.10:1, 2.13:1, and 2.11:1 for the respective combinations. The setting of 8 iterations and 16 subsets reproduced the ratio between hot spheres and background most closely. Conclusion: Based on the findings of this study, SUVs and uniformity might be calculated differently depending on other reconstruction parameters such as filter or FWHM. The mean SUV and uniformity showed the lowest fluctuation between 6 iterations/16 subsets and 8 iterations/16 subsets, and 8 iterations/16 subsets showed the hot-sphere-to-background ratio nearest to the true value. However, it cannot be concluded that only 6 iterations/16 subsets and 8 iterations/16 subsets produce the right images for clinical diagnosis; other factors may yield better images. For a more exact clinical diagnosis through quantitative analysis of DAT density in the region of the striatum, quantitative reference values from healthy people need to be established.
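A brief sketch of the two image metrics used above, with invented ROI pixel values: the coefficient of variation as a uniformity index and the hot-sphere-to-background contrast ratio (the phantom's true ratio was 2.35:1). The ROI sizes and noise levels are assumptions.

```python
import numpy as np

def coefficient_of_variation(roi_values: np.ndarray) -> float:
    """CV (%) = standard deviation / mean * 100, used here as a uniformity index."""
    return float(np.std(roi_values) / np.mean(roi_values) * 100.0)

def contrast_ratio(hot_sphere_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Measured hot-sphere-to-background activity ratio."""
    return float(np.mean(hot_sphere_roi) / np.mean(background_roi))

# Hypothetical ROI pixel values for one iteration/subset setting.
rng = np.random.default_rng(0)
background = rng.normal(loc=1.0, scale=0.12, size=500)   # noisier at higher iterations
hot_sphere = rng.normal(loc=2.1, scale=0.20, size=120)

print(f"CV: {coefficient_of_variation(background):.2f} %")
print(f"contrast: {contrast_ratio(hot_sphere, background):.2f} : 1")
```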


Cryosurgery of Lung with 2.4 mm Cryoprobe: An Experimental in vivo Study of the Cryosurgery in Canine Model (냉동침을 이용한 폐 냉동수술의 동물실험: 냉동수술 방법의 비교 실험)

  • Kim Kwang-Taik;Chung Bong-Kyu;Lee Sung-Ho;Cho Jong-Ho;Son Ho-Sung;Fang Young-Ho;Sun Kyung;Park Sung-Min
    • Journal of Chest Surgery
    • /
    • v.39 no.7 s.264
    • /
    • pp.520-526
    • /
    • 2006
  • Background: The clinical application of cryosurgery in the management of lung cancer is limited because the response of the lung to low temperature is not well understood. The purpose of this study is to investigate the response of pulmonary tissue to extremely low temperature. Material and Method: After general anesthesia, the lungs of twelve Mongrel dogs were exposed through the fifth intercostal space. A cryosurgical probe (Galil Medical, Israel) with a diameter of 2.4 mm was placed 20 mm deep into the lung, and four thermosensors (T1-4) were inserted at 5 mm intervals from the cryoprobe. The animals were divided into group A (n=8) and group B (n=4). In group A the temperature of the cryoprobe was decreased to $-120^{\circ}C$ and maintained for 20 minutes; after 5 minutes of thawing, this freezing cycle was repeated. In group B the same freezing temperature was maintained for 40 minutes continuously without thawing. The lungs were removed for microscopic examination one day after the cryosurgery. In four dogs of group A the lung was removed 7 days after the cryosurgery to examine the delayed changes of the cryoinjured tissue. Result: In group A the temperatures of T1 and T2 decreased to $4.1{\pm}11^{\circ}C$ and $31{\pm}5^{\circ}C$, respectively, in the first freezing cycle. During the second freezing period the temperatures of the thermosensors decreased below those of the first freezing cycle: T1 $-56.4{\pm}9.7^{\circ}C$, T2 $-18.4{\pm}14.2^{\circ}C$, T3 $18.5{\pm}9.4^{\circ}C$, and T4 $35.9{\pm}2.9^{\circ}C$. Comparing the temperature-distance graph of the first cycle to that of the second cycle revealed that the temperature-distance relationship changed from curved to linear. In group B the temperatures of the thermosensors decreased and were maintained throughout the 40 minutes of freezing. On light microscopy, hemorrhagic infarctions of diameter $18.6{\pm}6.4mm$ were found in group A; the infarction size was $14{\pm}3mm$ in group B. No viable cell was found within the infarction area. Conclusion: The thermal conductivity of the lung changes during the thawing period, resulting in a further decrease in the temperature of the lung tissue during the second freezing cycle and expanding the area of cell destruction.

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are the most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in the layers, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared toward multiclass classification tasks such as credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature. Only a few types of MSVM, however, have been tested in prior studies that apply MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of these techniques to a real-world case of credit rating in Korea. The application is to corporate bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that a modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
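As a hedged illustration of two of the decomposition schemes named above (One-Against-One and One-Against-All), the sketch below trains multiclass SVMs on synthetic data standing in for financial ratios and rating classes; it is not the authors' DAGSVM-with-ordered-list procedure, their data, or their parameter settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for financial ratios and bond-rating classes
# (the study used real data for 1,295 Korean manufacturing firms; this is random).
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Binary SVM used as the building block of both decomposition schemes.
base_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))

# One-Against-One: one binary SVM per pair of rating classes.
ovo = OneVsOneClassifier(base_svm).fit(X_tr, y_tr)
# One-Against-All: one binary SVM per rating class versus the rest.
ova = OneVsRestClassifier(base_svm).fit(X_tr, y_tr)

print(f"one-against-one accuracy: {ovo.score(X_te, y_te):.3f}")
print(f"one-against-all accuracy: {ova.score(X_te, y_te):.3f}")
```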