• Title/Summary/Keyword: large size

Determination of the Optimum Sampling Area for the Benthic Community Study of the Songdo Tidal Flat and Youngil Bay Subtidal Sediment (송도 갯벌과 영일만 조하대 저서동물의 군집조사를 위한 적정 채집면적의 결정)

  • Koh, Chul-Hwan;Kang, Seong-Gil;Lee, Chang-Bok
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.4 no.1
    • /
    • pp.63-70
    • /
    • 1999
  • The optimum sampling area applicable to benthic community studies was estimated from large survey data sets from the Songdo tidal flat and the subtidal zone of Youngil Bay, Korea. A total of 250 samples taken with a 0.02 $m^2$ box corer on the Songdo tidal flat and 50 samples taken with a 0.1 $m^2$ van Veen grab in Youngil Bay covered a total sampling area of 5 $m^2$ at each site. It was assumed that a sampling area contained sufficient information on the sediment fauna if the cumulative number of species, the ecological indices, and the similarity index from cluster analysis reached 75% of the values found for the total sampling area (5 $m^2$). A total of 56 and 60 species occurred on the Songdo tidal flat and in Youngil Bay, respectively. The cumulative curve of species number ($N_{sp}$) as a function of sampling area (A in $m^2$) was fitted as $N_{sp}=37.379A^{0.257}$ ($r^2=0.99$) for the intertidal fauna and $N_{sp}=40.895A^{0.257}$ ($r^2=0.98$) for the subtidal fauna. Based on these curves and 75% similarity to the total sampling area (5 $m^2$), the optimum sampling area was proposed as 1.6 $m^2$ for the intertidal and 1.5 $m^2$ for the subtidal fauna. Ecological indices (species diversity, richness, evenness, and dominance) were recalculated on the basis of species composition in differently simulated sample sizes. Changes in these indices with sample size indicated that sampling could be done by collecting fauna from less than 0.5 $m^2$ up to 1.5 $m^2$ on the Songdo tidal flat and from less than 0.5 $m^2$ up to 1.2 $m^2$ in Youngil Bay. Changes in the similarity level across all units of each simulated sample size showed that sampling areas of 0.3 $m^2$ (Songdo tidal flat) and 0.6 $m^2$ (Youngil Bay) were required to obtain a similarity level of 75%. In conclusion, the sampling area determined jointly by the cumulative number of species, the ecological indices, and the similarity index from cluster analysis was 1.5 $m^2$ (0.02 $m^2$ box corer, n=75) for the Songdo tidal flat and 1.2 $m^2$ (0.1 $m^2$ van Veen grab, n=12) for Youngil Bay. If these sampling areas are covered in a field survey, the population densities of the seven dominant species comprising 68% of total faunal abundance on the Songdo tidal flat and the six species comprising 90% in Youngil Bay can be estimated at a precision level of P=0.2.
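The 75% criterion can be checked directly against the fitted power curve: setting $N_{sp}(A)=0.75\,N_{sp}(5)$ and solving gives $A = 5 \times 0.75^{1/z}$, where z is the fitted exponent. A minimal sketch using only the constants reported above (the helper name is ours):

```python
# Fitted species-area curves from the abstract: N_sp = c * A**z
c_inter, z_inter = 37.379, 0.257   # Songdo tidal flat, r^2 = 0.99
c_sub, z_sub = 40.895, 0.257       # Youngil Bay, r^2 = 0.98

def optimum_area(z, total_area=5.0, similarity=0.75):
    """Solve c*A**z = similarity * c*total_area**z for A;
    the coefficient c cancels out."""
    return total_area * similarity ** (1.0 / z)

print(f"species at 5 m^2 (intertidal): {c_inter * 5 ** z_inter:.1f}")  # ~56
print(f"optimum area (intertidal): {optimum_area(z_inter):.1f} m^2")   # ~1.6
```

With z = 0.257 this reproduces the proposed 1.6 $m^2$ for the intertidal fauna; the subtidal case is computed the same way from its own curve.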

Inhomogeneity correction in on-line dosimetry using transmission dose (투과선량을 이용한 온라인 선량측정에서 불균질조직에 대한 선량 보정)

  • Wu, Hong-Gyun;Huh, Soon-Nyung;Lee, Hyoung-Koo;Ha, Sung-Whan
    • Journal of Radiation Protection and Research
    • /
    • v.23 no.3
    • /
    • pp.139-147
    • /
    • 1998
  • Purpose: Tissue inhomogeneity such as lung affects the tumor dose as well as the transmission dose in a new concept of on-line dosimetry, which estimates the tumor dose from the transmission dose using a new algorithm. This study was carried out to confirm the accuracy of correction by tissue density when estimating tumor dose from transmission dose. Methods: A cork phantom (CP, density $0.202\;g/cm^3$), with a density similar to lung parenchyma, and a polystyrene phantom (PP, density $1.040\;g/cm^3$), with a density similar to soft tissue, were used. Dose measurements were carried out under conditions simulating the human chest. To simulate AP-PA irradiation, PPs of 3 cm thickness were placed above and below the CP, which had a thickness of 5, 10, or 20 cm. To simulate lateral irradiation, a 6 cm thick PP was placed between two 10 cm thick CPs, and an additional 3 cm thick PP was placed on both lateral sides. X-rays of 4, 6, and 10 MV were used. Field size ranged from $3{\times}3$ cm to $20{\times}20$ cm, and the phantom-chamber distance (PCD) was 10 to 50 cm. The results were compared with another set of data using the equivalent thickness of PP corrected by density. Results: When the transmission dose through the CP was compared with that through the density-corrected equivalent thickness of PP, the average error was $0.18{\pm}0.27\%$ for 4 MV, $0.10{\pm}0.43\%$ for 6 MV, and $0.33{\pm}0.30\%$ for 10 MV with a 5 cm thick CP. When the CP was 10 cm thick, the errors were $0.23{\pm}0.73\%$, $0.05{\pm}0.57\%$, and $0.04{\pm}0.40\%$, while for 20 cm the errors were $0.55{\pm}0.36\%$, $0.34{\pm}0.27\%$, and $0.34{\pm}0.18\%$ for the corresponding energies. With the lateral irradiation model, the differences were $1.15{\pm}1.86\%$, $0.90{\pm}1.43\%$, and $0.86{\pm}1.01\%$ for the corresponding energies. Relatively large differences were found at a PCD of 10 cm; omitting the 10 cm PCD data, the differences were reduced to $0.47{\pm}1.17\%$, $0.42{\pm}0.96\%$, and $0.55{\pm}0.77\%$. Conclusion: When a tissue inhomogeneity such as lung lies in the path of the x-ray beam, the tumor dose can be calculated from the transmission dose after correction using tissue density.
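The abstract does not spell out the correction formula; a common approach, assumed here, is to scale slab thickness by physical density so that areal density (g/cm²) is preserved. A minimal sketch with the densities reported above:

```python
# Density scaling assumed (not spelled out in the abstract): replace a cork
# slab by polystyrene of equal areal density, i.e. thickness * density.
RHO_CORK = 0.202         # g/cm^3, lung-equivalent cork phantom (CP)
RHO_POLYSTYRENE = 1.040  # g/cm^3, soft-tissue-equivalent polystyrene (PP)

def equivalent_pp_thickness(cork_thickness_cm: float) -> float:
    """Polystyrene thickness with the same areal density as the cork slab."""
    return cork_thickness_cm * RHO_CORK / RHO_POLYSTYRENE

for t_cm in (5, 10, 20):
    print(f"{t_cm:>2} cm cork ~ {equivalent_pp_thickness(t_cm):.2f} cm polystyrene")
```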

Anatomical Studies on Tumorous Tissue Formed in a Stem of Ailanthus altissima Swingle by Artificial Banding and Its Subsequent Removing Treatment -Characters of Individual Elements- (인위적(人爲的)인 밴드결체(結締) 및 해체처리(解締處理)로 형성(形成)된 가죽나무(Ailanthus altissima Swingle) 수간(樹幹)의 종양조직(腫瘍組織)에 관한 해부학적(解剖學的) 연구(硏究) -조직(組織) 구성세포(構成細胞)의 특성(特性)-)

  • Eom, Young Geun;Lee, Phil Woo
    • Journal of Korean Society of Forest Science
    • /
    • v.78 no.3
    • /
    • pp.287-301
    • /
    • 1989
  • A tree of Ailanthus altissima Swingle was fastened with a 19 mm wide plastic band around the stem 180 cm above ground level and left to grow under this condition for one year. After removal of the band, tumorous tissue gradually developed, and the tree bearing distinct tumorous tissue, an overgrowth surrounding the stem, was harvested two years after band removal. To investigate this tumorous part and compare it with the adjacent normal parts in the anatomical features of individual elements, samples were taken from the tumorous part and from the parts directly above and below it and 40 cm above and below it. The tumor wood, with its remarkably wider growth increment, occurred in the 3rd growth ring, the first ring formed after removal of the band. The barrier zone, which delimited the discolored wood from the normal-colored wood inwards, appeared in the intra-2nd growth ring produced during the fastened period in the tumorous part, and false ring-like zones equivalent to the barrier zone also appeared in the normal-colored 2nd growth rings of the parts directly and 40 cm above and below the tumorous part. The tumor wood (the 3rd growth ring) and the portion of the 2nd growth ring formed after the barrier zone in the tumorous part shared common characteristics: irregular growth ring boundaries, misshapen and shorter fibers and vessel elements, and large ray widths and heights. The springwood pores were smaller in diameter in the tumor wood, and the larger radial and smaller tangential diameters of summerwood solitary pores and of individual pores in pore multiples, seen in the portion of the 2nd growth ring formed after the barrier zone, became nearly isodiametric in the tumor wood. Ray densities greatly increased only in the tumor wood, the 3rd growth ring, in the tumorous part. The massive tumor wood was caused not by cell size but by cell number, because the radial and tangential diameters of fibers in the tumor wood were not appreciably different from those in the same-aged growth rings of the parts directly and 40 cm above and below the tumorous part.

Diagnostic Efficacy of FDG-PET Imaging in Solitary Pulmonary Nodule (고립성폐결절의 진단시 FDG-PET의 임상적 유용성에 관한 연구)

  • Cheon, Eun Mee;Kim, Byung-Tae;Kwon, O. Jung;Kim, Hojoong;Chung, Man Pyo;Rhee, Chong H.;Han, Yong Chol;Lee, Kyung Soo;Shim, Young Mog;Kim, Jhingook;Han, Jungho
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.6
    • /
    • pp.882-893
    • /
    • 1996
  • Background: Over one-third of solitary pulmonary nodules are malignant, but most malignant SPNs are in the early stages at diagnosis and can be cured by surgical removal. Therefore, early diagnosis of a malignant SPN is essential for saving the patient's life. The incidence of pulmonary tuberculosis in Korea is somewhat higher than in other countries, and a large number of SPNs are found to be tuberculomas. Most primary physicians tend to regard a newly detected solitary pulmonary nodule as a tuberculoma on the basis of noninvasive imaging such as CT alone, and they prefer clinical observation, without further invasive procedures, if the findings suggest benignancy. Many kinds of noninvasive procedures have been introduced to differentiate malignant SPNs from benign ones, but none has been satisfactory. FDG-PET is a unique tool for imaging and quantifying the status of glucose metabolism. On the basis that glucose metabolism is increased in malignant transformed cells compared with normal cells, FDG-PET is considered a satisfactory noninvasive procedure for differentiating malignant from benign SPNs. We therefore performed FDG-PET in patients with a solitary pulmonary nodule and evaluated its accuracy in the diagnosis of malignant SPNs. Method: 34 patients with a solitary pulmonary nodule less than 6 cm in diameter who visited Samsung Medical Center from September 1994 to September 1995 were evaluated prospectively. Simple chest roentgenography, chest computed tomography, and FDG-PET scans were performed for all patients. The results of FDG-PET were evaluated against the final diagnosis confirmed by sputum study, PCNA, fiberoptic bronchoscopy, or thoracotomy. Results: (1) There was no significant difference in nodule size between malignant ($3.1{\pm}1.5$ cm) and benign nodules ($2.8{\pm}1.0$ cm) (p>0.05). (2) The peak SUV (standardized uptake value) of malignant nodules ($6.9{\pm}3.7$) was significantly higher than that of benign nodules ($2.7{\pm}1.7$), and time-activity curves showed a continuous increase in malignant nodules. (3) Three false negative cases were found among the eighteen malignant nodules on FDG-PET imaging; all three were nonmucinous bronchioloalveolar carcinomas less than 2 cm in diameter. (4) FDG-PET imaging showed 83% sensitivity, 100% specificity, 100% positive predictive value, and 84% negative predictive value. Conclusion: FDG-PET imaging is a new noninvasive diagnostic method for the solitary pulmonary nodule with high accuracy in the differential diagnosis between malignant and benign nodules. FDG-PET imaging could be used for the differential diagnosis of SPNs not properly diagnosed with conventional methods before thoracotomy. Considering its high accuracy, this procedure may play an important role in the decision to perform thoracotomy in difficult cases.
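The reported accuracy figures follow directly from the stated counts (34 nodules, 18 malignant with 3 false negatives, 16 benign with no false positives, given the 100% specificity); a quick check:

```python
# Counts reconstructed from the abstract: 18 malignant (3 missed), 16 benign.
TP, FN = 15, 3   # true positives = 18 - 3 false negatives
TN, FP = 16, 0   # 100% specificity implies no false positives

sensitivity = TP / (TP + FN)   # 15/18 = 0.83
specificity = TN / (TN + FP)   # 16/16 = 1.00
ppv = TP / (TP + FP)           # 15/15 = 1.00
npv = TN / (TN + FN)           # 16/19 = 0.84

print(f"sens {sensitivity:.0%}, spec {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}")
```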

Case Analysis of the Promotion Methodologies in the Smart Exhibition Environment (스마트 전시 환경에서 프로모션 적용 사례 및 분석)

  • Moon, Hyun Sil;Kim, Nam Hee;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.171-183
    • /
    • 2012
  • In the development of technologies, the exhibition industry has received much attention from governments and companies as an important marketing channel, and exhibitors have come to regard exhibitions as a new channel of marketing activities. However, the growing size of exhibitions, in net square feet and in the number of visitors, naturally creates a competitive environment. To use effective marketing tools in this environment, exhibitors have planned and implemented many promotion techniques. In particular, a smart environment that lets them provide real-time information to visitors enables various kinds of promotion. However, promotions that ignore visitors' varied needs and preferences can lose their original purpose and function: indiscriminate promotions that visitors perceive as spam cannot achieve their goals. Exhibitors therefore need an approach based on the STP strategy, which segments visitors on appropriate evidence (Segmentation), selects target visitors (Targeting), and provides proper services to them (Positioning). To use the STP strategy in the smart exhibition environment, the following characteristics must be considered. First, an exhibition is defined as a market event of a specific duration, held at intervals. Accordingly, exhibitors who plan promotions should prepare different events and promotions for each exhibition; when traditional STP strategies are adopted, a system may have to provide services using insufficient information about visitors, and its performance must still be guaranteed. Second, for automatic segmentation, cluster analysis, a commonly used data mining technique, can be adopted. In the smart exhibition environment, visitor information is acquired in real time, and services using this information should also be provided in real time. However, many clustering algorithms have a scalability problem: they hardly work on a large database and require domain knowledge to determine input parameters. A suitable methodology must therefore be selected and fitted so that real-time services can be provided. Finally, the data available in the smart exhibition environment should be exploited. Because useful data such as booth visit records and event participation records are available, the STP strategy for the smart exhibition rests not only on demographic segmentation but also on behavioral segmentation. In this study, we therefore analyze a case of a promotion methodology with which exhibitors can provide differentiated services to segmented visitors in the smart exhibition environment. First, considering the characteristics of the smart exhibition environment, we derive evidence for segmentation and fit the clustering methodology for real-time services. Although there are many studies on classifying visitors, we adopt a segmentation methodology based on visitors' behavioral traits. Through direct observation, Veron and Levasseur classified visitors into four groups, likening their traits to animals (butterfly, fish, grasshopper, and ant). Because the variables of their classification, such as the number of visits and the average time per visit, can be estimated in the smart exhibition environment, their scheme provides a theoretical and practical background for our system (see the sketch below). Next, we construct a pilot system that automatically selects suitable visitors according to the objectives of each promotion and instantly sends promotion messages to them. That is, based on the segmentation of our methodology, our system automatically selects suitable visitors according to the characteristics of each promotion. We applied this system to a real exhibition environment and analyzed the resulting data. Classifying visitors into four types by their behavioral patterns in the exhibition, we provide insights for researchers building smart exhibition environments and derive promotion strategies fitting each cluster. First, visitors of the ANT type show high response rates for all promotion messages except experience promotions; they are attracted by tangible benefits in the exhibition area and dislike promotions that require a long time. In contrast, visitors of the GRASSHOPPER type show a high response rate only for experience promotions. Second, visitors of the FISH type favor coupon and content promotions; although they do not examine booths in detail, they prefer to obtain further information such as brochures. Exhibitors who want to convey much information in limited time should pay particular attention to visitors of this type. These promotion strategies are expected to give exhibitors insights when they plan and organize their activities, and to improve their performance.
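As a sketch of the segmentation step, visitors could be clustered on the two behavioral variables named above (number of visits and average visit time) with k-means, where k = 4 mirrors the butterfly/fish/grasshopper/ant typology. The feature values below are invented for illustration; the paper does not publish its data, and k-means is one plausible choice of clustering method, not necessarily the paper's:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-visitor features from smart-exhibition logs:
# [number of booths visited, average time per booth visit (minutes)]
visits = np.array([
    [25, 1.5], [30, 1.2],   # many short visits
    [4, 12.0], [5, 10.5],   # few long visits
    [18, 6.0], [20, 5.5],   # many medium visits
    [6, 2.0], [7, 1.8],     # few short visits
])

X = StandardScaler().fit_transform(visits)  # put both features on one scale
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster index per visitor; mapping to animal types is interpretive
```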

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively (these operators are sketched below). For the hybrid concept of the GA, the iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these centers and secondary markets, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the network. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model minimizes the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost arises from transporting the returned products between the centers and secondary markets. The fixed cost results from the opening or closing decision at each center and secondary market. That is, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively), and collection center 1 is opened while the others are closed, the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiments, the proposed HGA and a conventional competing approach are compared using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; it has no local search technique such as the IHCM used in the proposed HGA approach. CPU time, optimal solution, and optimal setting are used as performance measures. Two types of RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the HGA and GA approaches. The MIP models of the two RLNCC types were programmed in Visual Basic 6.0, and the computing environment was an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are a total of 10,000 generations, a population size of 20, a crossover rate of 0.5, a mutation rate of 0.1, and a search range of 2.0 for the IHCM. Twenty iterations in total were made to eliminate the randomness of the HGA and GA searches. With performance comparisons, network representations by opening/closing decisions, and convergence processes for the two RLNCC types, the experimental results show that the HGA performs significantly better than the GA in terms of the optimal solution, although the GA is slightly quicker in CPU time. Finally, the proposed HGA approach proves more efficient than the conventional GA approach on the two RLNCC types, since the former combines a GA search process with a local search process as an additional search scheme, while the latter relies on the GA search alone. In future work, much larger RLNCCs will be tested to establish the robustness of our approach.
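A compact, self-contained sketch of the HGA loop described above, with a placeholder fitness standing in for the MIP total-cost objective (all function and variable names are ours, and the hill climber shown is a simple bit-flip variant, not Michalewicz's exact IHCM):

```python
import random

def hga(fitness, n_bits, pop_size=20, generations=200,
        p_cross=0.5, p_mut=0.1, hill_steps=10):
    """GA with two-point crossover, bit-flip mutation, elitist selection
    over the enlarged sampling space (parents + children), plus a hill
    climber refining the best solution (the hybrid step)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            a, b = random.sample(pop, 2)
            if random.random() < p_cross:                    # two-point crossover
                i, j = sorted(random.sample(range(n_bits), 2))
                a = a[:i] + b[i:j] + a[j:]
            a = [bit ^ (random.random() < p_mut) for bit in a]  # random mutation
            children.append(a)
        pop = sorted(pop + children, key=fitness)[:pop_size]    # elitist selection
        # local search: single-bit flips around the current best solution
        cand = min([best] + pop, key=fitness)
        for _ in range(hill_steps):
            nb = cand[:]
            nb[random.randrange(n_bits)] ^= 1
            if fitness(nb) < fitness(cand):
                cand = nb
        best = min(best, cand, key=fitness)
    return best

# toy objective: open as few facilities as possible, but at least one
print(hga(lambda s: sum(s) + (100 if not any(s) else 0), n_bits=8))
```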

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and social problem solving, analyzing data that is openly available or collected directly. In Korea, various companies and individuals are attempting big data analysis, but they struggle from the initial stage because of limited big data disclosure and collection difficulties. System improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly as services that open public data, such as the Korean Government 3.0 portal (data.go.kr). In addition to these government efforts, services that share data held by corporations or individuals are running, but useful data is hard to find because so little is shared. Moreover, big data traffic problems can occur because the entire dataset must be downloaded and examined just to grasp the attributes of, and simple information about, the shared data. A new system for big data processing and utilization is therefore needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper to address the sharing problem; it means providing users with results generated by analyzing the data in advance. Pre-analysis improves the usability of big data by providing information from which a data user searching for big data can grasp its properties and characteristics. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that may occur when original data is disclosed can be avoided, enabling big data sharing between the data provider and the data user. Second, appropriate preprocessing results must be generated quickly, according to the disclosure level of the raw data or the network status, and provided to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing data requested by a user, it preprocesses the data to a size deliverable on the current network and transmits it, so that no big traffic occurs. In this paper, we present various data sizes according to the disclosure level obtained through pre-analysis. This method is expected to produce lower traffic volumes than the conventional approach of sharing only raw data across a large number of systems. We describe how to solve the problems that occur when big data is released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, with a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent, required on the data provider side, performs pre-analysis of big data to generate a Data Descriptor containing information on Sample Data, Summary Data, and Raw Data (a sketch follows below); it also performs fast and efficient big data preprocessing through distributed processing and continuously monitors network traffic. The Client Agent is the agent placed on the data user side. Through the Data Descriptor, the result of the pre-analysis, the Client Agent can search big data quickly, and the desired data can be requested from the server and downloaded according to its disclosure level. The Server Agent and Client Agent are separated so that data published by a provider can be used by a user. In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem; we construct the detailed modules of the client-server model and present the design of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses it into new data; by analyzing the newly processed data through the Server Agent, the data user changes roles and becomes a data provider. A data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and, using the sample data, become a data user performing new analysis. In this way, raw data is processed, the processed big data is used, and a natural sharing environment forms: the roles of data provider and data user are not fixed, and everyone can be both a provider and a user. The client-server model thus solves the big data sharing problem, provides a free sharing environment for secure big data disclosure, and offers an ideal shared service that makes big data easy to find.
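As an illustration of the Server Agent's pre-analysis step, the sketch below builds a Data Descriptor (sample data, summary data, raw-data pointer) with PySpark. The field layout and names are our assumption, since the paper only names the three components:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pre-analysis").getOrCreate()

def build_data_descriptor(path: str, sample_fraction: float = 0.01) -> dict:
    """Summarize a raw dataset and attach a small sample so a data user
    can judge the data without downloading all of it. Illustrative only;
    the descriptor schema is not specified in the paper."""
    df = spark.read.option("header", True).csv(path)
    per_column = {c: df.agg(F.min(c), F.max(c), F.countDistinct(c)).first()
                  for c in df.columns}
    return {
        "raw_data": path,  # what is actually shared depends on disclosure level
        "summary_data": {c: tuple(row) for c, row in per_column.items()},
        "sample_data": df.sample(fraction=sample_fraction, seed=42)
                         .limit(100).collect(),
        "row_count": df.count(),
    }
```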

Light and Electron Microscopy of Gill and Kidney on Adaptation of Tilapia(Oreochromis niloticus) in the Various Salinities (틸라피아의 해수순치시(海水馴致時) 아가미와 신장(腎臟)의 광학(光學) 및 전자현미경적(電子顯微鏡的) 관찰(觀察))

  • Yoon, Jong-Man;Cho, Kang-Yong;Park, Hong-Yang
    • Applied Microscopy
    • /
    • v.23 no.2
    • /
    • pp.27-40
    • /
    • 1993
  • This study was undertaken to examine the light microscopic and ultrastructural changes in the gill and kidney of female tilapia (Oreochromis niloticus) adapted to 0‰, 10‰, 20‰, and 30‰ salt concentrations, using light, scanning, and transmission electron microscopy. The results are summarized as follows. Gill chloride cell hyperplasia, gill lamellar epithelial separation, kidney glomerular shrinkage, blood congestion in the kidneys, and deposition of hyaline droplets in the kidney glomeruli and tubules were the histological alterations observed in Oreochromis niloticus. The incidence and severity of gill chloride cell hyperplasia increased rapidly with increasing salinity, and the number of chloride cells in the gill lamellae increased rapidly in response to high external NaCl concentrations. Scanning electron microscopy (SEM) showed that the gill secondary lamellae of tilapia exposed to seawater were characterized by rough, convoluted surfaces during adaptation. Transmission electron microscopy (TEM) showed that mitochondria in chloride cells exposed to seawater were large and elongate and contained well-developed cristae, and also revealed increased numbers of chloride cells. The presence of two mitochondria-rich cell types is discussed with regard to their possible role in the hypoosmoregulatory changes that occur during seawater adaptation. Most seawater-adapted Oreochromis niloticus had an occasional glomerulus completely filling Bowman's capsule in the kidney; glomerular shrinkage occurred more frequently in the kidney tissues of individuals living in 10‰, 20‰, and 30‰ seawater than in those living in 0‰ freshwater, and blood congestion was more severe in individuals living in 20‰ and 30‰ seawater than in those living in 10‰. There were decreases in the glomerular area and in the nuclear area in the main segments of the nephron, and the nuclear areas of nephron cells in seawater-adapted tilapia were smaller than those in freshwater-adapted fish. Our findings demonstrate that Oreochromis niloticus tolerated a moderately saline environment, and the body weight gain of fish living in 30‰ was relatively higher than that of fish living in 10‰, in spite of the histopathological changes.

Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June;Ham, Young-Jin;Lee, Ki-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.141-156
    • /
    • 2013
  • Research on the welfare services of local governments in Korea has tended to focus on isolated issues such as the disabled, childcare, and the aging population (Kang, 2004; Jung et al., 2009). Lately, local officials have realized that they need more comprehensive welfare services for all residents, not just the above-mentioned groups, yet studies with a focused-group approach remain the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System is an information system that comprehensively manages 292 welfare benefits provided by 17 ministries and 40 thousand welfare services provided by 230 local authorities in Korea; its purpose is to improve the efficiency of the social welfare delivery process. The study of local government expenditure has been on the rise over the last few decades since the restart of local autonomy, but these studies have limitations in data collection. Measurement of a local government's welfare effort (spending) has primarily relied on the expenditure or budget per individual set aside for welfare. This practice of using a monetary value per individual as a proxy for welfare effort is based on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). This expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, using the actual amount spent or a percentage figure as a dependent variable has limitations: since budget and expenditure are greatly influenced by the total budget of a local government, relying on such monetary values may inflate or deflate the true welfare effort (Jang, 2012). In addition, government budgets usually contain a large amount of administrative cost, i.e., salaries for local officials, which is largely unrelated to actual welfare expenditure (Jang, 2011). This paper uses local government welfare service data from the detailed data sets linked to the Social Security Information System. Its purpose is to analyze the factors that affected the social welfare spending of 230 local authorities in 2012. The paper applies a multiple regression-based model (sketched below) to the pooled financial data from the system, and the following factors affecting self-funded welfare spending were identified. In our research model, we use a local government's ratio of welfare budget to total budget (%) as the true measure of its welfare effort, excluding central government subsidies or support used for local welfare services, because central government support does not truly reflect the welfare effort of a local government. The dependent variable is the volume of welfare spending, and the independent variables fall into three categories: socio-demographic factors, the local economy, and the financial capacity of the local government. The paper categorizes local authorities into three groups (districts, cities, and suburban areas) and uses a dummy variable as the control variable for the local political factor. The analysis demonstrates that the volume of welfare spending is commonly influenced by the ratio of welfare budget to total local budget, the infant population, the self-reliance ratio, and the level of unemployment. Interestingly, the influential factors differ by the size of the local government. In the analysis of the determinants of self-funded welfare spending, we found significant effects of local government financial characteristics (the degree of financial independence and the ratio of the social welfare budget), of the regional economy (the job opening-to-application ratio), and of population characteristics (the ratio of infants). This means that local authorities should adopt differentiated welfare strategies according to their own conditions and circumstances. The contribution of this paper is that it successfully identifies significant factors influencing the welfare spending of local governments in Korea.
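A minimal sketch of the regression setup, with hypothetical column names standing in for the paper's indicators (the actual data comes from the Social Security Information System and is not reproduced here):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled 2012 data, one row per local authority.
df = pd.read_csv("local_welfare_2012.csv")  # file name is illustrative

# Welfare budget ratio regressed on socio-demographic, economic, and
# fiscal indicators, with a dummy for authority type (district/city/suburb).
model = smf.ols(
    "welfare_budget_ratio ~ infant_population + financial_independence"
    " + unemployment_rate + C(authority_type)",
    data=df,
).fit()
print(model.summary())
```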

Quality Assurance for Intensity Modulated Radiation Therapy (세기조절방사선치료(Intensity Modulated Radiation Therapy; IMRT)의 정도보증(Quality Assurance))

  • Cho Byung Chul;Park Suk Won;Oh Do Hoon;Bae Hoonsik
    • Radiation Oncology Journal
    • /
    • v.19 no.3
    • /
    • pp.275-286
    • /
    • 2001
  • Purpose: To set up quality assurance (QA) procedures for implementing intensity modulated radiation therapy (IMRT) clinically, and to report the QA procedures performed for one patient with prostate cancer. Materials and methods: $P^3IMRT$ (ADAC) and a linear accelerator (Siemens) with a multileaf collimator (MLC) were used to implement IMRT. First, the positional accuracy and reproducibility of the MLC and the leaf transmission factor were evaluated. RTP commissioning was performed again to account for small-field effects. After RTP recommissioning, a test plan for a C-shaped PTV was made using 9 intensity modulated beams, and the calculated isocenter dose was compared with the dose measured in a solid water phantom. As patient-specific IMRT QA, one patient with prostate cancer was planned using 6 beams with a total of 74 segmented fields. The same beams were used to recalculate dose in a solid water phantom. Doses of these beams were measured with a 0.015 cc micro-ionization chamber, a diode detector, films, and an array detector and compared with the calculated ones. Results: The positioning accuracy of the MLC was about 1 mm, and the reproducibility was around 0.5 mm. For the leaf transmission factor for 10 MV photon beams, interleaf leakage was measured as $1.9\%$ and midleaf leakage as $0.9\%$ relative to a $10\times10\;cm^2$ open field. Penumbra measurements with film, the diode detector, the micro-ionization chamber, and a conventional 0.125 cc chamber showed that the 80-20% penumbra width measured with the 0.125 cc chamber was 2 mm larger than that of film, which means a 0.125 cc ionization chamber is unacceptable for measuring small fields such as a 0.5 cm beamlet. After RTP recommissioning, the discrepancy between the measured and calculated dose profiles for a small field of $1\times1\;cm^2$ was less than $2\%$. The isocenter dose of the C-shaped PTV test plan, measured twice with the micro-ionization chamber in the solid phantom, showed errors of up to $12\%$ for individual beams, but the total delivered dose agreed with the calculated one within $2\%$. The transverse dose distribution measured with EC-L film generally agreed with the calculated one. The isocenter dose for the patient measured in the solid phantom agreed within $1.5\%$. On-axis dose profiles of each individual beam at the position of the central leaf, measured with film and the array detector, showed that in the out-of-field region the calculation underestimated the dose by about $2\%$, while inside the field the measured dose agreed with the calculation within $3\%$, except at some positions. Conclusion: Tighter quality control of the MLC is necessary for IMRT than for conventional large-field treatment, and QA procedures that check intensity patterns more efficiently need to be developed. In conclusion, we set up appropriate QA procedures for IMRT through a series of verifications, including measurement of the absolute dose at the isocenter with a micro-ionization chamber, film dosimetry for verifying the intensity pattern, and measurement with an array detector for comparing off-axis dose profiles.
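A trivial helper for the kind of point-dose agreement check used throughout these measurements (the 2% tolerance reuses the reported agreement level as an example; the function names are ours):

```python
# Percent difference of measured vs. calculated point dose, with a
# pass/fail check against a tolerance such as the 2% agreement reported above.
def percent_diff(measured: float, calculated: float) -> float:
    return 100.0 * (measured - calculated) / calculated

def qa_pass(measured: float, calculated: float, tolerance_pct: float = 2.0) -> bool:
    return abs(percent_diff(measured, calculated)) <= tolerance_pct

print(qa_pass(measured=198.7, calculated=200.0))  # True: -0.65%, within 2%
```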
