• Title/Summary/Keyword: Ratio error


A Study on the Spatial Patterns and the Factors on Agglomeration of New Industries in Korea (신산업의 공간분포 패턴과 집적 요인에 관한 연구)

  • Sa, Hoseok
    • Journal of the Economic Geographical Society of Korea
    • /
    • v.23 no.2
    • /
    • pp.125-146
    • /
    • 2020
  • There is an increasing need to foster new industries at the local level. This study analyzes the spatial patterns of new industries in Korea from 2007 to 2017 and identifies the determinants of their agglomeration in 2017. The study finds that new industries are unevenly distributed and concentrated around the Seoul Metropolitan Area (SMA), and the regional disparity between the SMA and non-SMA regions is prominent. Furthermore, new industries exhibit strong positive spatial autocorrelation, being heavily concentrated in a few regions of Korea. The determinants of agglomeration are explored with a spatial statistical model. The results of the spatial error model indicate that the number of graduate students, the ratio of technology-based start-ups, and the number of elementary, middle, and high schools have a significant effect on new industries. In addition, the specialization and diversity of the industrial structure in knowledge-based manufacturing and knowledge-based service industries are statistically significant. The study implies that non-SMA regions need policies for attracting talented people, developing human resources, and improving the regional environment in order to strengthen regional competitiveness in promoting new industries.
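  The spatial error model mentioned above is commonly specified as follows; this is the standard textbook form with spatial weight matrix W and covariate matrix X, not necessarily the paper's exact estimating equation:

  $$ y = X\beta + u, \qquad u = \lambda W u + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^{2} I) $$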

The Noise Performance of Diffusion Tensor Image with Different Gradient Schemes (확산 텐서 영상에서 확산 경사자장의 방향수에 따른 잡음 분석)

  • Lee Young-Joo;Chang Yongmin;Kim Yong-Sun
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.6
    • /
    • pp.439-445
    • /
    • 2004
  • Diffusion tensor imaging (DTI) exploits the random diffusional motion of water molecules and is useful for characterizing the architecture of tissues. In some tissues, such as muscle or cerebral white matter, the cellular arrangement creates a strongly preferred direction of water diffusion, i.e., the diffusion is anisotropic. The degree of anisotropy is often represented using diffusion anisotropy indices: relative anisotropy (RA), fractional anisotropy (FA), and volume ratio (VR). In this study, FA images were obtained using different gradient schemes (N = 6, 11, 23, 35, 47). Mean values and standard deviations of FA were then measured at several anatomic locations for each scheme. The results showed that both the mean values and the standard deviations of FA decreased as the number of gradient directions increased. The standard error of the ADC measurement also decreased as the number of diffusion gradient directions increased. In conclusion, the gradient schemes showed significantly different noise performance, and schemes with more gradient directions clearly improved the quality of the FA images. However, considering the image acquisition time and the standard deviation of FA, 23 gradient directions is clinically optimal.
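  For reference, fractional anisotropy is conventionally computed from the eigenvalues of the diffusion tensor; the abstract does not state the formula, but the standard definition is:

  $$ \mathrm{FA} = \sqrt{\tfrac{3}{2}}\;\frac{\sqrt{(\lambda_1-\bar{\lambda})^2+(\lambda_2-\bar{\lambda})^2+(\lambda_3-\bar{\lambda})^2}}{\sqrt{\lambda_1^{2}+\lambda_2^{2}+\lambda_3^{2}}}, \qquad \bar{\lambda} = \frac{\lambda_1+\lambda_2+\lambda_3}{3} $$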

An Analysis of Eating Out Expenditure Behavior of Urban Households by Decile Group (도시가계의 10분위별 외식비 지출행태 분석)

  • Choi, Mun-Yong;Mo, Soo-Won;Lee, Kwang-Bae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.11
    • /
    • pp.7820-7830
    • /
    • 2015
  • Korean households' demand for food consumed away from home is steadily increasing. The share of eating-out expenditure in household income, however, has recently tended to decrease irrespective of income group. This paper therefore analyses the food-away-from-home expenditures of salary and wage earners' households by income decile group. The eating-out expenditure is modelled as a function of household income and then estimated using econometric methods such as regression, rolling regression, impulse response analysis, and variance decomposition of the forecast error. The regression results indicate that the higher the income decile group, the lower the income elasticity of eating-out expenditure, and that high-income groups show seasonal eating-out patterns while low-income groups do not. The coefficients of the dynamic rolling regression are much smaller than those of the static regression, meaning that households tend to reduce the share of income spent on eating out. The impulse response analysis suggests that an increase in eating-out expenditure lasts longer in the higher income groups than in the lower income groups. The variance decomposition also shows that household income plays a much more important role in determining eating-out expenditure in the higher income groups than in the lower income groups.
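  A minimal sketch of the static versus rolling (dynamic) regression idea described above, using statsmodels; the synthetic data, window length, and variable names are placeholders, not the paper's household survey data:

```python
# Eating-out expenditure regressed on household income, once for the whole
# sample (static OLS) and once over a moving window (rolling OLS).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS

rng = np.random.default_rng(0)
n = 120  # e.g., monthly observations (hypothetical)
income = pd.Series(100 + np.cumsum(rng.normal(0.5, 1.0, n)), name="income")
eating_out = 0.08 * income + rng.normal(0, 0.5, n)

X = sm.add_constant(income)

# Static OLS: one income coefficient for the entire sample.
static = sm.OLS(eating_out, X).fit()

# Rolling OLS: the income coefficient is re-estimated over a 36-period window,
# tracing a time-varying propensity to spend on eating out.
rolling = RollingOLS(eating_out, X, window=36).fit()

print("static income coefficient:", static.params["income"])
print(rolling.params["income"].dropna().tail())
```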

A Blind Watermarking Algorithm using CABAC for H.264/AVC Main Profile (H.264/AVC Main Profile을 위한 CABAC-기반의 블라인드 워터마킹 알고리즘)

  • Seo, Young-Ho;Choi, Hyun-Jun;Lee, Chang-Yeul;Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.2C
    • /
    • pp.181-188
    • /
    • 2007
  • This paper proposes a watermark embedding/extracting method using CABAC (Context-based Adaptive Binary Arithmetic Coding), the entropy coder for the main profile of H.264/AVC (MPEG-4 Part 10). The algorithm selects the blocks, and the coefficients within a block, on the basis of the contexts extracted from their relationship to the adjacent blocks and coefficients. A watermark bit is embedded either without any modification of the coefficient or by replacing the LSB (least significant bit) of the coefficient with the watermark bit, considering both the absolute value of the selected coefficient and the watermark bit. This makes it hard for an attacker to find the watermarked locations. By selecting a few coefficients near the DC coefficient according to the contexts, the algorithm also satisfies the robustness requirement. In experiments with attacks of various kinds and strengths, the maximum error ratio of the extracted watermark was 5.02%, which confirms that the proposed algorithm has a very high level of robustness. Because the watermark is embedded during the context modeling and binarization process of CABAC, the additional amount of computation for locating and selecting the coefficients to embed the watermark is very small. Consequently, the method is expected to be very useful in applications where video must be compressed right after acquisition.
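  A simplified sketch of the embedding rule described in the abstract, under the interpretation that the coefficient is left untouched when its LSB already matches the watermark bit and is otherwise modified; the CABAC-context-based selection of blocks and coefficients is not reproduced here:

```python
def embed_bit(coeff: int, bit: int) -> int:
    """Embed one watermark bit into the LSB of a quantized coefficient's magnitude.
    If the LSB already equals the bit, the coefficient is returned unmodified."""
    sign = -1 if coeff < 0 else 1
    magnitude = abs(coeff)
    if (magnitude & 1) == (bit & 1):
        return coeff                        # no modification needed
    return sign * ((magnitude & ~1) | (bit & 1))

def extract_bit(coeff: int) -> int:
    """Recover the embedded bit from the coefficient's LSB."""
    return abs(coeff) & 1

# Example: embedding bit 1 into coefficient -6 yields -7; extraction returns 1.
assert extract_bit(embed_bit(-6, 1)) == 1
```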

Thin Layer Drying Model of Sorghum

  • Kim, Hong-Sik;Kim, Oui-Woung;Kim, Hoon;Lee, Hyo-Jai;Han, Jae-Woong
    • Journal of Biosystems Engineering
    • /
    • v.41 no.4
    • /
    • pp.357-364
    • /
    • 2016
  • Purpose: This study was performed to define the drying characteristics of sorghum by developing thin layer drying equations and evaluating various grain drying equations. Thin layer drying equations provide the foundation for establishing thick layer drying equations, which can be adopted to determine the design conditions for an agricultural dryer. Methods: The drying rate of sorghum was measured under three levels of drying temperature (40°C, 50°C, and 60°C) and relative humidity (30%, 40%, and 50%) to analyze the drying process and investigate the drying conditions. The drying experiment was continued until the weight of the sorghum became constant. The experimental constants of four thin layer drying models were determined by fitting non-linear regression models to the drying experiment results. Results: The half response time of drying (moisture ratio = 0.5), an index of the drying rate, decreased as the drying temperature increased and the relative humidity decreased. When the drying temperature was 40°C at a relative humidity (RH) of 50%, the half response time was longest at 2.8 h; in contrast, it was only 1.2 h when the drying temperature was 60°C at 30% RH. The coefficients of determination for the Lewis model, simplified diffusion model, Page model, and Thompson model were 0.9976, 0.9977, 0.9340, and 0.9783, respectively. The Lewis model and the simplified diffusion model satisfied the drying conditions, showing an average coefficient of determination between the experimental constants and the predicted values of 0.9976 and a root mean square error (RMSE) of 0.0236. Conclusion: The simplified diffusion model was the most suitable for every combination of drying temperature and relative humidity, and the thin layer drying model is expected to be useful for developing a thick layer drying model.
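  For reference, the moisture ratio and the two best-known thin layer models compared above are commonly written as follows (standard forms from the drying literature; k and n are the fitted experimental constants, and the simplified diffusion and Thompson models follow analogous empirical forms, which the paper parameterizes from its own data):

  $$ \mathrm{MR} = \frac{M - M_e}{M_0 - M_e}, \qquad \text{Lewis: } \mathrm{MR} = e^{-kt}, \qquad \text{Page: } \mathrm{MR} = e^{-k t^{n}} $$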

Analysis Technique for Chloride Penetration using Double-layer and Time-Dependent Chloride Diffusion in Concrete (콘크리트내의 이중구조와 시간의존성을 고려한 염화물 해석기법의 개발)

  • Mun, Jin-Man;Kim, Jin-Yeong;Kim, Young-Joon;Oh, Gyeong-Seok;Kwon, Seung-Jun
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.19 no.5
    • /
    • pp.83-91
    • /
    • 2015
  • The chloride content induced at the concrete surface changes with the surface condition, and this is a key parameter for steel corrosion and service life in RC (reinforced concrete) structures. Many surface enhancement techniques using impregnation have been developed; however, evaluation techniques for chloride behavior through a doubly layered medium with time-dependent diffusion have rarely been proposed. This paper presents an analysis technique considering a double-layer concrete section and time-dependent diffusion behavior, and the results are compared with previous test results through inverse analysis. The chloride profiles from surface-impregnated concrete exposed to the atmospheric, tidal, and submerged zones for 2 years are adopted. Furthermore, surface chloride contents and diffusion coefficients are obtained and compared with those from Life365. When the time effect is considered, the relative error decreases from 0.28 to 0.20 in the atmospheric zone, from 0.29 to 0.11 in the tidal zone, and from 0.54 to 0.40 in the submerged zone, which shows more reasonable results. When the diffusion coefficients from Life365 are utilized, the relative errors increase, and a deeper penetration depth (e) and a lower diffusion coefficient ratio (D1/D2) are required due to the higher diffusion coefficient.
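  Analyses of this kind are typically built on the error-function solution of Fick's second law with a time-dependent diffusion coefficient; the commonly used single-layer forms (also adopted in Life365-type models) are shown below, while the paper's doubly layered (D1/D2) treatment extends them:

  $$ C(x,t) = C_s\left[1 - \operatorname{erf}\!\left(\frac{x}{2\sqrt{D(t)\,t}}\right)\right], \qquad D(t) = D_{\mathrm{ref}}\left(\frac{t_{\mathrm{ref}}}{t}\right)^{m} $$

  where C_s is the surface chloride content, D the chloride diffusion coefficient, and m the aging (time-dependence) exponent.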

Development and Field Application of Apparatus for Determination of Limit State Design Strength Characteristics in Weathered Ground (한계상태설계법 지반정수 산정을 위한 풍화대 강도특성 측정장치의 개발 및 현장적용에 관한 연구)

  • Kim, Ki Seog;Kim, Jong Hoon;Choi, Sung-oong
    • Tunnel and Underground Space
    • /
    • v.30 no.2
    • /
    • pp.164-179
    • /
    • 2020
  • When the limit state design method is applied to geotechnical structures, the accuracy and reliability of the design are mainly affected by parameters describing the geotechnical site characteristics, such as unit weight, Poisson's ratio, deformation modulus, cohesion, and friction angle. Especially when the structures are located in weathered ground, the cohesion and friction angle of the ground are closely related to the determination of the load and resistance parameters. Therefore, the accurate determination of these parameters, which are commonly obtained from field measurements such as the borehole shear test, is essential for the optimum design of geotechnical structures. In this study, 38 case studies have been analyzed to understand the importance of these parameters in designing ground structures, and the results also confirmed the importance of field measurement. Based on these evaluations, an apparatus for determining the strength characteristics that are fundamental to the limit state design (LSD) method has been newly developed, with improved functions following the ASTM suggestions. Through field application of the apparatus, its advantage of minimizing the possibility of errors during measurement has been verified, and the authors conclude that the essential parameters for LSD can be obtained with this apparatus for determining the strength characteristics of weathered ground.
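  A minimal sketch (not the paper's procedure) of how cohesion c and friction angle phi are commonly estimated from borehole shear test data: a Mohr-Coulomb line tau = c + sigma·tan(phi) is fitted to (normal stress, shear stress) pairs. The stress values below are made up for illustration:

```python
import numpy as np

sigma_n = np.array([50.0, 100.0, 150.0, 200.0])   # normal stress, kPa (hypothetical)
tau = np.array([42.0, 68.0, 95.0, 121.0])         # peak shear stress, kPa (hypothetical)

slope, intercept = np.polyfit(sigma_n, tau, 1)    # least-squares straight line
cohesion = intercept                              # c, kPa
friction_angle = np.degrees(np.arctan(slope))     # phi, degrees

print(f"c = {cohesion:.1f} kPa, phi = {friction_angle:.1f} deg")
```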

Comparative Evaluation of 18F-FDG Brain PET/CT AI Images Obtained Using Generative Adversarial Network (생성적 적대 신경망(Generative Adversarial Network)을 이용하여 획득한 18F-FDG Brain PET/CT 인공지능 영상의 비교평가)

  • Kim, Jong-Wan;Kim, Jung-Yul;Lim, Han-sang;Kim, Jae-sam
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.24 no.1
    • /
    • pp.15-19
    • /
    • 2020
  • Purpose: A generative adversarial network (GAN) is a deep learning technique that learns from real images to generate realistic synthetic images. In this study, artificial intelligence images acquired through a GAN were compared with real scan-time images to evaluate whether the technique is potentially useful. Materials and Methods: Data from 30 patients who underwent 18F-FDG brain PET/CT at Severance Hospital were acquired in 15-minute list mode and reconstructed into 1-, 2-, 3-, 4-, 5-, and 15-minute images. Images from 25 of the 30 patients were used to train the GAN, and the remaining 5 patients were used as verification images to confirm the learning model. The program was implemented in Python with the TensorFlow framework. After training with the Pix2Pix model, the trained model generated the artificial intelligence images, which were evaluated against the real scan-time images using the mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Results: The trained model was evaluated with the verification images. The 15-minute image generated from the 5-minute image showed a smaller MSE, and a higher PSNR and SSIM, than the one generated from the 1-minute image acquired after the start of the scan. Conclusion: This study confirmed that AI imaging technology is applicable. If these artificial intelligence imaging technologies are applied to nuclear medicine imaging in the future, it will be possible to acquire images even with a short scan time, which can be expected to reduce artifacts caused by patient movement and increase the efficiency of the scanning room.
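  A minimal sketch of the image-quality metrics used above (MSE, PSNR, SSIM), applied to a generated image versus a reference full-scan-time image; the arrays here are random placeholders, not PET data:

```python
import numpy as np
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128)).astype(np.float32)   # stand-in for the 15-min image
generated = (reference + 0.05 * rng.normal(size=(128, 128))).astype(np.float32)

mse = mean_squared_error(reference, generated)
psnr = peak_signal_noise_ratio(reference, generated, data_range=1.0)
ssim = structural_similarity(reference, generated, data_range=1.0)

print(f"MSE={mse:.5f}, PSNR={psnr:.2f} dB, SSIM={ssim:.4f}")
```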

Empirical Analysis on Agent Costs against Ownership Structure in Accordance with Verification of Suitability of the Model (모형의 적합성 검증에 따른 소유구조대비 대리인 비용의 실증분석)

  • Kim, Dae-Lyong;Lim, Kee-Soo;Sung, Sang-Hyeon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.8
    • /
    • pp.3417-3426
    • /
    • 2012
  • This study uses empirical analysis to determine how ownership structure (the shareholding ratios of insiders and foreigners) affects agent costs (measured as asset efficiency or the portion of non-operating expenses). Existing studies on the correlation between ownership structure and agent costs have adopted the pooled OLS model; on the premise that the pooled OLS model is not reliable enough for large panel data, this study additionally formulates a fixed effect model and a random effect model, which reflect the time of data formation and firm-specific effects, and verifies the suitability of the pooled OLS model before the comparative analysis, in order to improve the credibility and statistical validity of the empirical results. The data cover 331 companies, excluding financial institutions, over the 10 years from 1998 to 2007 following the IMF crisis. The suitability tests indicate that the random effect model is appropriate for asset efficiency among the agent cost measures, whereas the fixed effect model is appropriate for non-operating costs. Under the appropriate models, none of the hypotheses accepted in the pooled OLS model are accepted. This shows that different types of empirical analysis produce different results, and suggests that choosing an appropriate model is more important than other factors for generating statistically significant empirical results.
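  A minimal sketch of the model comparison described above (pooled OLS versus fixed-effects and random-effects panel estimators), using the linearmodels package; the synthetic panel and variable names are placeholders, not the paper's dataset of 331 firms over 1998-2007:

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

rng = np.random.default_rng(0)
firms, years = 50, 10
idx = pd.MultiIndex.from_product(
    [range(firms), range(1998, 1998 + years)], names=["firm", "year"]
)
df = pd.DataFrame(index=idx)
df["const"] = 1.0
df["insider_ratio"] = rng.random(len(df))                  # hypothetical ownership variable
firm_effect = np.repeat(rng.normal(0, 0.5, firms), years)  # unobserved firm heterogeneity
df["asset_efficiency"] = 0.3 * df["insider_ratio"] + firm_effect + rng.normal(0, 0.2, len(df))

y, X = df["asset_efficiency"], df[["const", "insider_ratio"]]

pooled_res = PooledOLS(y, X).fit()
fe_res = PanelOLS(y, X, entity_effects=True).fit()
re_res = RandomEffects(y, X).fit()

for name, res in [("Pooled OLS", pooled_res), ("Fixed effects", fe_res), ("Random effects", re_res)]:
    print(name, "insider_ratio coefficient:", round(res.params["insider_ratio"], 3))
```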

Optimization Model for the Mixing Ratio of Coatings Based on the Design of Experiments Using Big Data Analysis (빅데이터 분석을 활용한 실험계획법 기반의 코팅제 배합비율 최적화 모형)

  • Noh, Seong Yeo;Kim, Young-Jin
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.3 no.10
    • /
    • pp.383-392
    • /
    • 2014
  • Research on coatings is one of the most popular and active areas in the polymer industry, and coatings are becoming more important in the electronics, medical, and optical fields. In particular, the development of automotive and electronic parts is raising the technical requirements for the performance and accuracy of coatings. In addition, the need for more intelligent and automated systems in the industry is increasing with the introduction of the IoT and big data analysis based on environmental and context information. In this paper, we propose an optimization model, based on the design of experiments, for coating formulation data, using Internet of Things technologies and big data analytics. The coating formulation is first calculated from a design-of-experiments analysis; the operator then corrects it for the errors observed when the formulation is applied at the actual production site, and the corrected result data are collected. Furthermore, an optimization model that corrects the reference value was derived by leveraging big data analysis and IoT technology: instead of applying only the existing coating formulation as the reference data, manufacturing environment and context information are used so that color and quality, the most important factors, are maintained. Based on the data obtained from the experiments and the analysis, the accuracy of the mixing data is improved, and the working hours per LOT can be shortened. The reduced delivery time per treatment also shortens the production time, which can contribute to cost reduction and a lower defect rate. Furthermore, standard data for the manufacturing process can be obtained for the various models.
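  A minimal sketch (hypothetical, not the paper's model) of the design-of-experiments idea: fit a quadratic response surface relating two coating mixing ratios to a measured quality score, then pick the ratios that maximize the predicted quality within the experimental region:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 3x3 factorial runs: (ratio of component A, ratio of component B) -> quality score
A, B = np.meshgrid([0.2, 0.3, 0.4], [0.1, 0.2, 0.3])
X = np.column_stack([A.ravel(), B.ravel()])
y = np.array([0.70, 0.78, 0.82, 0.76, 0.88, 0.90, 0.74, 0.86, 0.87])

def features(x):
    # Quadratic response-surface terms: 1, a, b, a^2, b^2, a*b
    a, b = x
    return np.array([1.0, a, b, a * a, b * b, a * b])

# Least-squares fit of the response-surface coefficients.
F = np.vstack([features(x) for x in X])
coef, *_ = np.linalg.lstsq(F, y, rcond=None)

# Maximize predicted quality within the experimental bounds.
res = minimize(lambda x: -features(x) @ coef, x0=[0.3, 0.2],
               bounds=[(0.2, 0.4), (0.1, 0.3)])
print("suggested mixing ratios (A, B):", np.round(res.x, 3))
print("predicted quality:", round(float(features(res.x) @ coef), 3))
```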