• Title/Summary/Keyword: Level set methods


A Study on the Determinants of Land Price in a New Town (신도시 택지개발사업지역에서 토지가격 결정요인에 관한 연구)

  • Jeong, Tae Yun
    • Korea Real Estate Review / v.28 no.1 / pp.79-90 / 2018
  • The purpose of this study was to identify the pricing factors of residential land in new towns by estimating a pricing model for residential land. For this purpose, hedonic equations for each quantile of the conditional distribution of land prices were estimated using quantile regression methods and sale price data from Jangyu New Town in Gimhae. The quantile regression approach adopted here models the relation between a set of explanatory variables and each quantile of land price, and the results confirm that the effects of land characteristics differ across price quantiles. The number of years elapsed since the completion of land development enters the model as a quadratic term, because its impact may give rise to a non-linear price pattern: age appears to decrease the price until a certain number of years after construction and to increase it afterward. In the quantile regression estimates, land age has a statistically significant impact on land price at conventional significance levels, and the turning point appears to be shorter for the lower quantiles than for the higher ones. The positive effect of land designated for combined commercial and residential use was the largest. Parcels adjoining more than two roads are preferred, since such parcels receive more sunlight, and square-shaped parcels appear to be preferred to irregularly shaped ones because square land is more favorable for development. The commercial-residential use variable has a greater impact on low-priced residential land, which tends to be used mostly for rental housing and has characteristics different from those of residential houses. Residential land prices therefore behave differently depending on the price level, and this should be taken into account when assessing collateral value and drafting real estate policy.
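
A minimal sketch of the kind of quantile-regression hedonic model described in this abstract, using synthetic data and hypothetical variable names (age, roads, square); the paper's actual dataset and specification are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(0, 30, n)                 # years since completion of development
roads = rng.integers(1, 4, n)               # number of adjoining roads
square = rng.integers(0, 2, n)              # 1 if the parcel is square-shaped
price = 100 - 2.0 * age + 0.05 * age**2 + 8 * roads + 12 * square + rng.normal(0, 10, n)

X = sm.add_constant(pd.DataFrame({"age": age, "age_sq": age**2,
                                  "roads": roads, "square": square}))
# Estimate a separate hedonic equation for each conditional quantile of price
for q in (0.25, 0.50, 0.75):
    res = sm.QuantReg(price, X).fit(q=q)
    print(f"q={q}:", res.params.round(3).to_dict())
```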

Correlation between fish consumption and the risk of mild cognitive impairment in the elderly living in rural areas (농촌지역에 거주하는 노인의 생선 섭취량과 인지기능저하 위험도 간의 상관성)

  • Yu, Areum;Kim, Jihye;Choi, Bo Youl;Kim, Mi Kyung;Yang, Yoonkyoung;Yang, Yoon Jung
    • Journal of Nutrition and Health / v.54 no.2 / pp.139-151 / 2021
  • Purpose: This study examined the correlation between fish consumption and the risk of mild cognitive impairment in the elderly living in rural areas. Methods: The Yangpyeong cohort data collected in Yangpyeong in July 2009 and August 2010 were used as the data set. Adults aged 60 years or older who had completed the Korean version of the Mini-Mental State Examination (MMSE-KC) were selected for the study. After excluding participants with less than 500 kcal of energy intake (n = 2), a total of 806 adults were enrolled as the final subjects. Cognitive function was assessed using the MMSE-KC, and dietary intake was collected using a quantitative food frequency questionnaire comprising 106 foods or food groups. Results: Among men, educational level, the proportion of subjects who exercise, fruit and vegetable intake, and energy intake tended to increase with fish intake, while fish consumption decreased with increasing age. Among women, educational level, the proportion of subjects who exercise, the proportion currently taking dietary supplements, fruit and vegetable intake, and energy intake tended to increase with fish consumption, whereas fish consumption decreased with increasing age. Higher fish intake was associated with a higher MMSE-KC score after adjusting for confounding variables in women (p for trend = 0.016), but no significant trend was observed between fish intake and MMSE-KC score in men. Fish intake was inversely related to the risk of mild cognitive impairment after adjusting for covariates in women (Q1 vs. Q4; odds ratio, 0.46 [0.23-0.90]; p for trend = 0.009). Conclusion: This study found that higher fish consumption is correlated with a reduced risk of mild cognitive impairment in elderly women. Further longitudinal studies with larger samples are required to determine a causal relationship between fish intake and cognitive function.
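
As a rough, hypothetical illustration of the quartile-based analysis described above (logistic regression of impairment on intake quartile with covariate adjustment, yielding an odds ratio and a p for trend), on synthetic data rather than the Yangpyeong cohort:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 800
fish = rng.gamma(2.0, 20.0, n)                     # hypothetical fish intake (g/day)
age = rng.uniform(60, 85, n)
quartile = pd.qcut(fish, 4, labels=False)          # intake quartiles Q1..Q4 coded 0..3
logit_p = -1.0 - 0.3 * quartile + 0.03 * (age - 70)
mci = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # 1 = mild cognitive impairment

X = sm.add_constant(pd.DataFrame({"quartile": quartile, "age": age}))
res = sm.Logit(mci, X).fit(disp=0)
print("OR per quartile step:", round(float(np.exp(res.params["quartile"])), 2))
print("p for trend:", round(float(res.pvalues["quartile"]), 4))
```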

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference / 2012.08a / pp.80-81 / 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices is achieving ultra-high aspect ratio contact (UHARC) profiles without anomalous behaviors such as sidewall bowing and twisting. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still face formidable challenges such as tight limits on sidewall bowing and control of randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies, including novel plasma chemistries and plasma sources, to overcome these process limitations. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for the silicon dioxide etching process in inductively coupled C4F6/Ar/O2 plasmas. For this work, SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a quadrupole mass spectrometer (QMS). The surface chemistries of the etched samples were measured by X-ray photoelectron spectroscopy. To measure plasma parameters in a polymer-depositing environment, a self-cleaned RF Langmuir probe was used to keep the probe tip clean, and the results were double-checked with the cutoff probe, which is known to be a precise diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we propose a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. The predicted surface reaction modeling results showed good agreement with the experimental data. Building on these plasma-surface reaction studies, we have developed a 3D topography simulator using a multi-layer level set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. For ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required to treat these ballistic transports realistically. To address this issue, the related computational codes were efficiently parallelized for GPU (graphics processing unit) computing, so that the total computation time was improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model, and realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.
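
Since the search topic is level set methods, a minimal 2D sketch of the front-tracking idea behind such topography simulators may be useful: the surface is the zero level set of phi and evolves under phi_t + F|grad phi| = 0 with a mask-limited etch rate. This is only an illustration; it is not the paper's multi-layer, GPU-parallel algorithm, and all names and numbers are arbitrary.

```python
import numpy as np

nx, ny, dx, dt = 200, 200, 1.0, 0.4
jj, ii = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx)  # lateral, vertical grids
phi = 150.0 - ii                        # phi > 0 inside the material (rows 0..149)
open_mask = np.abs(jj - 100.0) < 20.0   # columns under the contact-hole opening

for _ in range(150):
    dpi = np.gradient(phi, dx, axis=0)
    dpj = np.gradient(phi, dx, axis=1)
    grad = np.sqrt(dpi**2 + dpj**2)     # |grad phi|
    speed = np.where(open_mask, 1.0, 0.0)  # etch rate only where the mask is open
    phi -= dt * speed * grad            # discrete phi_t + F|grad phi| = 0

etch_depth = (phi[:150] <= 0).sum(axis=0)  # cells removed from the material per column
print("max etched depth (cells):", int(etch_depth.max()))
```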


A Study on The Introduction Method of Industrial Design for Small Business (중소기업의 산업디자인 도입방법에 관한 연구)

  • 이수봉
    • Archives of design research / v.11 no.2 / pp.129-140 / 1998
  • This study aimed to explore and present a guideline model for the operator of a domestic small manufacturing business attempting to introduce industrial design for the first time by an easier and more effective method. As the method of study, the necessity of introducing industrial design was first examined by considering the role and importance of small businesses. Next, the results of a questionnaire survey of operators of domestic small businesses were analyzed and examined. As a result, preconditions for effectively introducing industrial design were identified, and based on the preconditions found through the survey, approachable introduction methods were examined. Finally, effective methods of introducing industrial design for domestic small manufacturing businesses were set up as a graphical model. The results of the study are as follows. First, the operator of a small business who tries to introduce industrial design needs to be well aware of six conditions as premises of an effective approach: 1) consciousness of the role of one's own industry toward the nation and the people; 2) creative managerial consideration and examination of the necessity of introducing industrial design as a catalyst; 3) a certain understanding of the essence and value of industrial design; 4) study and examination of cases of successful introduction of industrial design and of common introduction methods of small businesses; 5) advance examination of introduction methods that make use of professional design organizations and consultation desks; and 6) prudent examination of the purpose and method of appointing a designer and of information about designers. Second, for a small business introducing industrial design for the first time, it was confirmed that the approach should follow introduction types - preliminary introduction, partial introduction, regular introduction, and whole-business-level introduction - that consider the degree of necessity of introducing industrial design and the scope of introduction at the same time. This method can be applied step by step, and it was confirmed that its characteristics are that the method can be selected freely and that it is easy to understand because it is constructed in a visual form.


Quantitative Study of Annular Single-Crystal Brain SPECT (원형단일결정을 이용한 SPECT의 정량화 연구)

  • 김희중;김한명;소수길;봉정균;이종두
    • Progress in Medical Physics / v.9 no.3 / pp.163-173 / 1998
  • Nuclear medicine emission computed tomography (ECT) can be very useful for diagnosing the early stages of neuronal diseases and for measuring therapeutic results objectively, if energy metabolism, blood flow, biochemical processes, or dopamine receptors and transporters can be quantitated with ECT. However, physical factors including attenuation, scatter, the partial volume effect, noise, and the reconstruction algorithm make such quantitation very difficult, regardless of the type of SPECT. In this study, we quantitated the effects of attenuation and scatter using brain SPECT and a three-dimensional brain phantom, with and without applying the corresponding correction methods. The dual energy window method was applied for scatter correction: the photopeak and scatter energy windows were set to 140 keV ± 10% and 119 keV ± 6%, and 100% of the scatter window data were subtracted from the photopeak window prior to reconstruction. The projection data were reconstructed using a Butterworth filter with a cutoff frequency of 0.95 cycles/cm and an order of 10. Attenuation correction was done by Chang's method with attenuation coefficients of 0.12/cm and 0.15/cm for the reconstructions without and with scatter correction, respectively. For quantitation, regions of interest (ROIs) were drawn on three slices selected at the level of the basal ganglia. Without scatter correction, the ratios of ROI average values between the basal ganglia and background were 2.2 with attenuation correction and 2.1 without it; that is, the ratios were very similar with and without attenuation correction. With scatter correction, the corresponding ratios were 2.69 with attenuation correction and 2.64 without it. These results indicate that attenuation correction is necessary for quantitation. When the true ratios between the basal ganglia and background were 6.58, 4.68, and 1.86, the measured ratios with scatter and attenuation correction were 76%, 80%, and 82% of their true values, respectively. The approximately 20% underestimation could be partially due to the partial volume effect and the reconstruction algorithm, which were not investigated in this study, and partially due to the imperfections of the scatter and attenuation correction methods that were applied in consideration of clinical applicability.
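
A minimal sketch of the dual-energy-window scatter subtraction described above, with a first-order uniform attenuation correction factor shown alongside for comparison. Array shapes, count levels, and the single-point depth are made-up illustrative values, not the paper's phantom data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_proj, n_bins = 120, 128
# Simulated projection counts in the photopeak (140 keV +/- 10%) and
# scatter (119 keV +/- 6%) windows; values are arbitrary.
photopeak = rng.poisson(50.0, (n_proj, n_bins)).astype(float)
scatter = rng.poisson(12.0, (n_proj, n_bins)).astype(float)

k = 1.0  # 100% of the scatter-window counts are subtracted, as in the study
corrected = np.clip(photopeak - k * scatter, 0.0, None)

# First-order uniform attenuation correction factor (Chang-type idea) for a
# point at 8 cm depth, using mu = 0.15 /cm as for the scatter-corrected data.
mu, depth_cm = 0.15, 8.0
attenuation_factor = float(np.exp(mu * depth_cm))
print(round(corrected.mean(), 2), round(attenuation_factor, 2))
```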


Rewetting Strategies for the Drained Tropical Peatlands in Indonesia (인도네시아의 배수된 열대 이탄지에 대한 재습지화 전략)

  • Roh, Yujin;Kim, Seongjun;Han, Seung Hyun;Lee, Jongyeol;Son, Yowhan
    • Korean Journal of Environmental Biology / v.36 no.1 / pp.33-42 / 2018
  • Tropical peatlands in Indonesia have been deforested and converted to agricultural and plantation areas. To manage water levels and increase the overall productivity of crops, canals have been constructed in these peatlands. The canals have destroyed the structure of the tropical peatlands and increased subsidence and fire hazard risks in the region. To reduce degradation, the Indonesian government enacted regulations and a moratorium on tropical peatlands. A practical rewetting method permitted under these regulations is canal blocking. In this study, four canal blocking projects were investigated with respect to their planning, construction priority, design, building materials, construction, monitoring, and the time and costs associated with the blockings. In protected areas, regulations restricted the development of tropical peatlands deeper than 3 m, and the administration stopped issuing new concessions for such land use. The stated purpose of canal blocking in these areas was the restoration of the land, and the main considerations were maintaining the durability of the blockings and encouraging the participation of local stakeholders. In concession areas, regulations were set in place to restrict clear-cutting and shifting cultivation and to maintain the groundwater level in the peatland; there, the most significant priorities of the canal blocking projects were efficiency and cost-effectiveness. Nevertheless, the drainage of tropical peatlands has continued. On the basis of a literature review of regulations and rewetting methods in the tropical peatlands of Indonesia, we discuss improvements to the regulations and canal blocking practices adequate to rewet the peatlands. Our results should help establish an adequate direction and recommended guidelines on viable rewetting methods for the restoration of drained tropical peatlands in Southeast Asia.

Analysis and Performance Evaluation of Pattern Condensing Techniques used in Representative Pattern Mining (대표 패턴 마이닝에 활용되는 패턴 압축 기법들에 대한 분석 및 성능 평가)

  • Lee, Gang-In;Yun, Un-Il
    • Journal of Internet Computing and Services / v.16 no.2 / pp.77-83 / 2015
  • Frequent pattern mining, one of the major areas actively studied in data mining, is a method for extracting useful pattern information hidden in large data sets or databases. Frequent pattern mining approaches have been actively employed in a variety of application fields because their results allow various important characteristics of databases to be analyzed more easily and automatically. However, traditional frequent pattern mining methods, which simply extract all possible frequent patterns whose support values are not smaller than a user-given minimum support threshold, have the following problems. First, depending on the features of a given database and the threshold setting, traditional approaches have to generate an enormous number of patterns, and that number can grow in geometric progression; such work also wastes runtime and memory resources. Furthermore, the excessive number of pattern results makes analysis of the mining results troublesome. To solve these issues of traditional frequent pattern mining approaches, the concept of representative pattern mining and various related works have been proposed. In contrast to traditional methods that find all possible frequent patterns in a database, representative pattern mining approaches selectively extract a smaller number of patterns that represent the general frequent patterns. In this paper, we describe the details and characteristics of pattern condensing techniques that exploit the maximality or closure property of generated frequent patterns, and we compare and analyze these techniques. Given a frequent pattern, satisfying maximality means that every proper superset of the pattern has a support value smaller than the user-specified minimum support threshold; satisfying the closure property means that no proper superset has a support equal to that of the pattern. By mining maximal frequent patterns or closed frequent patterns, we can achieve effective pattern compression and perform mining operations with much smaller time and space resources. In addition, compressed patterns can be converted back into the original frequent pattern forms if necessary; in particular, the closed frequent pattern notation can recover the original patterns without any information loss, so a complete set of original frequent patterns can be obtained from the closed ones. Although the maximal frequent pattern notation does not guarantee complete recovery in the process of pattern conversion, it has the advantage of extracting a smaller number of representative patterns more quickly than the closed frequent pattern notation. In this paper, we show the performance results and characteristics of the aforementioned techniques in terms of pattern generation, runtime, and memory usage by conducting a performance evaluation on various real data sets collected from the real world. For a more exact comparison, the algorithms implementing these techniques were run on the same platform and at the same implementation level.
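
A toy illustration of the closure and maximality checks described above, assuming the complete set of frequent patterns and their (hypothetical) supports is already available:

```python
# Toy check of the two condensing properties, given every frequent pattern
# with a hypothetical support value (min_sup = 2).
frequent = {
    frozenset("a"): 5, frozenset("b"): 4, frozenset("ab"): 4,
    frozenset("c"): 3, frozenset("ac"): 2,
}
min_sup = 2

def is_closed(p, sup):
    # closed: no proper superset has exactly the same support
    return all(not (q > p and s == sup) for q, s in frequent.items())

def is_maximal(p):
    # maximal: no proper superset is frequent at all
    return all(not (q > p and s >= min_sup) for q, s in frequent.items())

closed = [sorted(p) for p, s in frequent.items() if is_closed(p, s)]
maximal = [sorted(p) for p in frequent if is_maximal(p)]
print("closed: ", closed)   # original supports are recoverable from these
print("maximal:", maximal)  # fewer patterns, but subset supports are lost
```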

Trends in QA/QC of Phytoplankton Data for Marine Ecosystem Monitoring (해양생태계 모니터링을 위한 식물플랑크톤 자료의 정도 관리 동향)

  • Yih, Wonho;Park, Jong Woo;Seong, Kyeong Ah;Park, Jong-Gyu;Yoo, Yeong Du;Kim, Hyung Seop
    • The Sea: Journal of the Korean Society of Oceanography / v.26 no.3 / pp.220-237 / 2021
  • Since the functional importance of marine phytoplankton was first advocated in the early 1880s, massive data on species composition and abundance have been produced by classical microscopic observation and by advanced auto-imaging technologies. More recently, pigment composition obtained from direct chemical analysis of phytoplankton samples or from indirect remote sensing has been used for group-specific quantification, which has diversified data production methods and improved spatiotemporal access to target sampling points. In many long-term marine ecosystem monitoring programs, phytoplankton species composition and abundance are included as basic monitoring items, and these data can serve as crucial evidence of long-term changes in phytoplankton community structure and ecological functioning at the monitoring stations. The usability of phytoplankton data, however, is sometimes limited by changes of data producers over the monitoring period: methods for sample treatment, analysis, and species identification may be inconsistent among different data producers and monitoring years. In-depth study to determine precise quantitative values of phytoplankton species composition and abundance was arguably begun by Victor Hensen in the late 1880s. International discussion on the quality assurance of marine phytoplankton data began in 1969 with SCOR Working Group 33 of ICSU; the final report of the working group in 1974 (UNESCO Technical Papers in Marine Science 18) was later revised and published as UNESCO Monographs on Oceanographic Methodology 6. The BEQUALM project, the predecessor of the IPI (International Phytoplankton Intercomparison) for marine phytoplankton data QA/QC under the ISO standard, was initiated in the late 1990s. The IPI promotes international collaboration so that all participating countries can apply the QA/QC standard established through twenty years of experience and practice. In Korea, however, such a QA/QC standard for marine phytoplankton species composition and abundance data is not yet established by law, whereas a standard for marine chemical measurement and analysis data has already been set up and is being managed. The first priority should be to establish a QA/QC standard system for species composition and abundance data of marine phytoplankton, which could then be extended to other functional groups at higher consumer levels of marine food webs.

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.147-168 / 2017
  • Accurate stock market forecasting has been studied in academia for a long time, and various forecasting models using diverse techniques now exist. Recently, many attempts have been made to predict stock indices using machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment decisions, technical analysis is more useful for short-term trading prediction and for statistical and mathematical techniques. Most studies using technical indicators have modeled stock price prediction as a binary classification - rising or falling - of future market movement (usually the next trading day). However, binary classification has many drawbacks for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by expanding the existing binary scheme into a multi-class system of stock index trends (upward trend, boxed, downward trend). Techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA), or artificial neural networks (ANN) can address such multi-class problems; here we propose an optimization model that uses a genetic algorithm as a wrapper to improve the performance of multi-class support vector machines (MSVM), which have proved superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and the selection of instances. To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method is more effective than the conventional multi-class SVM, which has been known to show the best prediction performance to date, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was confirmed to play a very important role in predicting the stock index trend, and its contribution to the improvement of the model was greater than that of the other factors. To verify the usefulness of GA-MSVM, we applied it to trend forecasting of Korea's real KOSPI200 stock index. Our research is primarily aimed at predicting trend segments in order to capture signal acquisition or short-term trend transition points. The experimental data set includes technical indicators such as price and volatility indices (2004-2017) and macroeconomic data (interest rate, exchange rate, S&P 500, etc.) for the KOSPI200 index. Using a variety of statistical methods including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, was coded into three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data were used for training and the remaining 30% for validation. To verify the performance of the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted. The MSVM adopted the one-against-one (OAO) approach, which is known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed GA-MSVM model performs at a significantly higher level than all comparative models.
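
A highly simplified sketch of the GA-wrapper idea described above: a binary chromosome selects which candidate indicators feed a one-against-one multi-class SVM, and the population is evolved on cross-validated accuracy. The paper's GA-MSVM also tunes kernel parameters and selects instances; this sketch (crossover omitted, synthetic data, hypothetical dimensions) only illustrates the feature-selection part.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 15))                  # 15 candidate technical indicators
y = rng.integers(-1, 2, 300)                    # -1 down, 0 boxed, +1 up

def fitness(mask):
    # Cross-validated accuracy of an OAO SVM on the selected feature subset
    if not mask.any():
        return 0.0
    clf = SVC(kernel="rbf", decision_function_shape="ovo")
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, (20, 15)).astype(bool)  # initial population of feature masks
for _ in range(10):                              # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]      # keep the fittest half
    children = parents[rng.integers(0, 10, 20)].copy()
    flip = rng.random(children.shape) < 0.1      # bit-flip mutation
    children ^= flip
    pop = children

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected indicator indices:", np.flatnonzero(best))
```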

Effect of Pressure Rise Time on Tidal Volume and Gas Exchange During Pressure Control Ventilation (압력조절환기법에서 압력상승시간(Pressure Rise Time)이 흡기 일환기량 및 가스교환에 미치는 영향)

  • Jeoung, Byung-O;Koh, Youn-Suck;Shim, Tae-Sun;Lee, Sang-Do;Kim, Woo-Sung;Kim, Dong-Soon;Kim, Won-Dong;Lim, Chae-Man
    • Tuberculosis and Respiratory Diseases
    • /
    • v.48 no.5
    • /
    • pp.766-772
    • /
    • 2000
  • Background: Pressure rise time (PRT) is the time in which the ventilator achieves the set airway pressure in pressure-targeted modes such as pressure control ventilation (PCV). As PRT varies, the peak inspiratory flow rate of the ventilator also varies in principle, and if PRT is set to a shorter duration, the effective duration at the target pressure level is prolonged, which in turn should increase inspiratory tidal volume (Vti) and mean airway pressure (Pmean). We also postulated that the increase in Vti with shortening of PRT may relate inversely to the patient's baseline airway resistance. Methods: In 13 paralyzed patients on PCV (pressure control 18 ± 9.5 cmH2O, FiO2 0.6 ± 0.3, PEEP 5 ± 3 cmH2O, f 20/min, I:E 1:2) with a Servo 300 ventilator (Siemens-Elema, Solna, Sweden) for respiratory failure of various causes, PRTs of 10%, 5%, and 0% were applied in random order. At 30 min of each PRT trial, peak inspiratory flow (PIF, L/sec), Vti (ml), Pmean (cmH2O), and arterial blood gases were determined. Results: At PRT 10%, 5%, and 0%, PIF was 0.69 ± 0.13, 0.77 ± 0.19, and 0.83 ± 0.22, respectively (p < 0.001); Vti was 425 ± 94, 439 ± 101, and 456 ± 106, respectively (p < 0.001); and Pmean was 11.2 ± 3.7, 12.0 ± 3.7, and 12.5 ± 3.8, respectively (p < 0.001). pH was 7.40 ± 0.08, 7.40 ± 0.92, and 7.41 ± 0.96, respectively (p = 0.00); PaCO2 (mm Hg) was 47.4 ± 15.8, 47.2 ± 15.7, and 44.6 ± 16.2, respectively (p = 0.004); PAO2-PaO2 (mm Hg) was 220 ± 98, 224 ± 95, and 227 ± 94, respectively (p = 0.004); and VD/VT, as determined by (PaCO2 - PECO2)/PaCO2, was 0.67 ± 0.07, 0.67 ± 0.08, and 0.66 ± 0.08, respectively (p = 0.007). The correlation between airway resistance and the change in Vti from PRT 10% to 0% was r = -0.243 (p = 0.498). Conclusion: Shortening the pressure rise time during PCV was associated with increased tidal volume, increased mean airway pressure, and a lower PaCO2.
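
A worked example of the dead-space ratio formula quoted above, VD/VT = (PaCO2 - PECO2)/PaCO2, using an illustrative mixed-expired CO2 value that is not taken from the paper:

```python
# Bohr dead-space ratio: VD/VT = (PaCO2 - PECO2) / PaCO2
paco2 = 47.4            # arterial CO2 tension (mm Hg), as reported at PRT 10%
peco2 = 15.6            # hypothetical mixed-expired CO2 tension (mm Hg)
vd_vt = (paco2 - peco2) / paco2
print(round(vd_vt, 2))  # ~0.67, the same magnitude as reported in the study
```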
