• Title/Summary/Keyword: Value Engineering

The research for the yachting development of Korean Marina operation plans (요트 발전을 위한 한국형 마리나 운영방안에 관한 연구)

  • Jeong Jong-Seok;Hugh Ihl
    • Journal of Navigation and Port Research / v.28 no.10 s.96 / pp.899-908 / 2004
  • Rising incomes and the introduction of the five-day working week have given Korean people more opportunity to enjoy leisure time, and many Koreans have become interested in marine sports such as yachting and in marine leisure equipment. With the popularization and development of such equipment, the scope of marine activities in Korea has been expanding, as it has in the advanced maritime countries. However, the current conditions for these sports in Korea are far from advanced, lagging behind even less developed countries. In order to develop Korea's underdeveloped marina resources, the marina models of advanced nations need to be adapted to the specific needs and circumstances of Korea. We therefore carried out a comparative analysis of how Australia, New Zealand, Singapore, Japan, and Malaysia operate their marinas, reaching the following conclusions. First, in marina operations, in order to protect personal property rights and to preserve the environment, membership and non-membership as well as profit and non-profit schemes should be operated separately, without regulating the dress code for entering or leaving the clubhouse. Second, in order to generate greater added value, new sporting events should be hosted each year; volunteers should be used actively, greater interest in yacht tourism should be encouraged, CIQ procedures for foreign yachts should be simplified, and language services should be provided. Third, a permanent yacht school should be established, with classes taught by qualified instructors; beginner, intermediate, and advanced classes should be managed separately, with special emphasis on a dinghy yacht program for children. Fourth, arrival and departure at the moorings should be regulated autonomously, and there should be systematic measures allowing the marina, once usage fees have been paid, to compensate in part for loss of and damage to equipment and to provide security and surveillance. Fifth, marine safety personnel should be organized from civilian organizations in accordance with Korea's current circumstances, to be used actively in benchmarking, rescue operations, and maritime searches in times of disaster at sea.

Opportunity Tree Framework Design For Optimization of Software Development Project Performance (소프트웨어 개발 프로젝트 성능의 최적화를 위한 Opportunity Tree 모델 설계)

  • Song Ki-Won;Lee Kyung-Whan
    • The KIPS Transactions:PartD / v.12D no.3 s.99 / pp.417-428 / 2005
  • Today, IT organizations carry out projects with a vision tied to marketing and financial profit, and realizing that vision requires improving project-performing ability in terms of QCD. Organizations have made considerable efforts to achieve this through process improvement; large companies such as IBM, Ford, and GE have achieved more than 80% of their success through information-technology-driven business process re-engineering rather than through the improvement effect of computerization alone. Collecting, analyzing, and managing data on completed projects is essential to this objective, but quantitative measurement is difficult because software is invisible and the effects and efficiency gains of process change are not directly observable, which makes it hard to extract an improvement strategy. This paper measures and analyzes project performance, focusing on an organization's external effectiveness and internal efficiency (Quality, Delivery, Cycle time, and Waste). Based on the measured project performance scores, an OT (Opportunity Tree) model was designed for optimizing project performance. The design proceeds as follows. First, metadata are derived from projects and analyzed with a quantitative GQM (Goal-Question-Metric) questionnaire. Then a project performance model is built from the questionnaire data, and the organization's performance score is calculated for each area. The scores are revised by integrating vision weights elicited from all stakeholders (CEO, middle managers, developers, investors, and customers). From this, routes for improvement are identified and an optimized improvement method is suggested. Existing approaches to software process improvement have been highly effective at dividing up processes, but somewhat unsatisfactory in their structural ability to develop and systematically manage strategies when the processes are applied to projects; the proposed OT model provides a solution to this problem. The OT model offers an optimal improvement method in line with the organization's goals and, applied with the proposed methods, can reduce the risks that may arise in the course of process improvement. In addition, satisfaction with the improvement strategy can be increased by eliciting vision weights from all stakeholders through the qualitative questionnaire and reflecting them in the calculation. The OT model is also useful for optimizing market expansion and financial performance by controlling Quality, Delivery, Cycle time, and Waste.
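
For readers unfamiliar with the weighting step mentioned in the abstract, the following Python sketch shows one way per-area performance scores could be revised with stakeholder vision weights and ranked as improvement routes. The area names, scores, weights, and function names are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch (not the authors' implementation): combining per-area performance
# scores with stakeholder "vision weights" to rank improvement areas.
# All scores and weights below are illustrative placeholders.

AREAS = ["Quality", "Delivery", "Cycle time", "Waste"]

# Hypothetical per-area scores measured from GQM questionnaires (0-100 scale).
area_scores = {"Quality": 62.0, "Delivery": 71.0, "Cycle time": 55.0, "Waste": 48.0}

# Hypothetical vision weights elicited from each stakeholder group (each row sums to 1).
vision_weights = {
    "CEO":       {"Quality": 0.40, "Delivery": 0.30, "Cycle time": 0.20, "Waste": 0.10},
    "manager":   {"Quality": 0.25, "Delivery": 0.35, "Cycle time": 0.25, "Waste": 0.15},
    "developer": {"Quality": 0.30, "Delivery": 0.20, "Cycle time": 0.30, "Waste": 0.20},
    "investor":  {"Quality": 0.20, "Delivery": 0.40, "Cycle time": 0.20, "Waste": 0.20},
    "customer":  {"Quality": 0.45, "Delivery": 0.30, "Cycle time": 0.15, "Waste": 0.10},
}

def weighted_area_scores(scores, weights_by_stakeholder):
    """Average the stakeholder-weighted score for each area."""
    n = len(weights_by_stakeholder)
    return {
        area: sum(w[area] * scores[area] for w in weights_by_stakeholder.values()) / n
        for area in scores
    }

if __name__ == "__main__":
    revised = weighted_area_scores(area_scores, vision_weights)
    # Areas with the lowest revised score are the most promising improvement routes.
    for area in sorted(revised, key=revised.get):
        print(f"{area:>10}: revised score {revised[area]:6.2f} (raw {area_scores[area]:5.1f})")
```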

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering / v.7 no.9 / pp.351-360 / 2018
  • The LWR (Locally Weighted Regression) model is traditionally a lazy-learning model designed to obtain a prediction for a given input variable, the query point; it amounts to a regression equation over a short interval, obtained by a learning procedure that assigns higher weights to samples closer to the query point. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed method sequentially generates and integrates LWR models over time, using a genetic algorithm to obtain the solution at a specific query point. A weakness of existing LWR approaches is that multiple LWR models can be generated depending on the indicator function and the selected data samples, and the quality of the predictions can vary with the model; however, no research has addressed the selection or combination of multiple LWR models. In this study, after generating an initial LWR model according to an indicator function and a sample data set, we iterate an evolutionary learning process to obtain a suitable indicator function, and we assess the LWR models on other sample data sets in order to overcome data-set bias. We adopt an eager-learning strategy, generating and storing LWR models incrementally as data are produced for each section. To obtain a prediction at a specific point in time, an LWR model is generated from newly produced data within a predetermined interval and then combined, using a genetic algorithm, with the existing LWR models of that section. The proposed method gives better results than selecting among multiple LWR models with a simple averaging method. The results are compared with predictions from multiple regression analysis on real data such as hourly traffic volume in a specific area and hourly sales at a highway rest area.
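
The following Python sketch illustrates the basic building blocks named in the abstract: a Gaussian-kernel locally weighted regression at a query point, and a weighted combination of several LWR models built on successive data windows. The combination weights stand in for what the paper tunes with a genetic algorithm; all data, parameter values, and function names are hypothetical.

```python
# Minimal sketch (illustrative, not the paper's exact method): LWR prediction at a
# query point plus a weighted combination of several LWR models.
import numpy as np

def lwr_predict(X, y, x_query, tau=1.0):
    """Predict y at x_query with Gaussian-kernel locally weighted linear regression."""
    Xb = np.column_stack([np.ones(len(X)), X])   # add intercept column
    xq = np.array([1.0, x_query])
    d2 = (X - x_query) ** 2
    w = np.exp(-d2 / (2.0 * tau ** 2))           # higher weight closer to the query point
    W = np.diag(w)
    theta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ y
    return xq @ theta

def ensemble_predict(models, weights, x_query):
    """Combine the predictions of several (X, y) windows with normalized weights."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # a GA would search over these weights
    preds = np.array([lwr_predict(X, y, x_query) for X, y in models])
    return float(weights @ preds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 200)
    y = np.sin(x) + rng.normal(scale=0.1, size=x.size)
    # Three LWR models built on successive time windows of the data stream.
    models = [(x[i:i + 80], y[i:i + 80]) for i in (0, 60, 120)]
    print(ensemble_predict(models, weights=[0.2, 0.3, 0.5], x_query=7.5))
```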

Swell Effect Correction for the High-resolution Marine Seismic Data (고해상 해저 탄성파 탐사자료에 대한 너울영향 보정)

  • Lee, Ho-Young;Koo, Nam-Hyung;Kim, Wonsik;Kim, Byoung-Yeop;Cheong, Snons;Kim, Young-Jun
    • Geophysics and Geophysical Exploration / v.16 no.4 / pp.240-249 / 2013
  • The quality of seismic data from marine geological and engineering surveys deteriorates because of sea swell, since surveys are often conducted with a swell height of about 1~2 m. Swell-effect correction is required to enhance the horizontal continuity of the seismic data and to satisfy a resolution of less than 1 m. We applied swell correction to 8-channel high-resolution airgun seismic data and to 3.5 kHz subbottom profiler (SBP) data. Accurate sea-bottom detection is important for the correction. To detect the sea bottom, we used the maximum amplitude of the seismic signal around the expected sea bottom and picked the first sample exceeding a threshold value related to that maximum amplitude. To make the sea bottom easier to find in low-quality data, we transformed the input data into envelope data or into data cross-correlated with the sea-bottom wavelet. We then averaged the picked sea-bottom depths and calculated the correction values. The maximum correction for the airgun data was about 0.8 m, and the maximum corrections for the two kinds of 3.5 kHz SBP data were 0.5 m and 2.0 m, respectively. Using appropriate swell-correction methods, we enhanced the continuity of the subsurface layers and produced high-quality seismic sections.
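
As an illustration of the picking-and-averaging idea described in the abstract (not the authors' actual processing flow), the sketch below picks a sea bottom on each trace with an amplitude threshold, smooths the picks with a running mean, and shifts each trace accordingly; the window bounds and smoothing length are placeholder parameters.

```python
# Minimal sketch (illustrative): threshold-based sea-bottom picking per trace,
# a moving-average reference, and per-trace shifts that flatten swell-induced wobble.
import numpy as np

def pick_sea_bottom(trace, search_start, search_end, frac=0.5):
    """Return the first sample in the search window whose |amplitude| exceeds
    frac * (maximum |amplitude| in the window)."""
    window = np.abs(trace[search_start:search_end])
    threshold = frac * window.max()
    idx = np.argmax(window >= threshold)         # first index reaching the threshold
    return search_start + idx

def swell_correction(section, search_start, search_end, smooth=51):
    """Shift each trace so its picked sea bottom follows a smoothed (averaged) pick."""
    picks = np.array([pick_sea_bottom(tr, search_start, search_end) for tr in section])
    kernel = np.ones(smooth) / smooth
    reference = np.convolve(picks, kernel, mode="same")  # running mean of the picks
    shifts = np.rint(reference - picks).astype(int)       # samples to move each trace
    corrected = np.zeros_like(section)
    for i, (tr, s) in enumerate(zip(section, shifts)):
        # np.roll wraps samples around; a production flow would pad with zeros instead.
        corrected[i] = np.roll(tr, s)
    return corrected, picks, shifts
```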

Optimization for Solid Culture of Phellinus sp. by Response Surface Methodology (반응표면방법에 의한 Phellinus sp. 고체배양의 최적화)

  • Kang, Tae-Su;Kang, An-Seok;Sohn, Hyung-Rac;Kang, Mi-Sun;Lim, Yaung-Iee;Lee, Shin-Young;Jung, Sung-Mo
    • The Korean Journal of Mycology / v.26 no.2 s.85 / pp.265-274 / 1998
  • This study was carried out to obtain basic data for the artificial cultivation of Phellinus sp. The optimum conditions for mycelial growth of an isolated Phellinus sp. on substrates of different sawdusts (Quercus aliena, Morus alba, and Alnus japonica) were determined by response surface methodology. The suitable ratio of rice bran added to sawdust and the suitable moisture content for mycelial growth were about 30% (w/w) and 65~70% (w/v), respectively, in all sawdust media. The optimal initial pH for mycelial growth was in the range of pH 5~6 for Quercus aliena and Morus alba, whereas for Alnus japonica it was pH 6. The optimum temperature for mycelial growth was about 25~30°C, depending on the wood substrate. From the response surface analysis, the values of the independent variables for Quercus aliena at the stationary point were 31.01% (w/w) rice bran, pH 5.31, and 69.03% (w/v) moisture content, with an expected mycelial growth of about 8.32 cm; both the ratio of rice bran added to sawdust ($X_1$) and the moisture content ($X_3$) were effective for mycelial growth. For Morus alba, the rice bran ratio, initial pH, and moisture content at the stationary point were 28.77% (w/w), 5.28, and 69.8% (w/v), respectively, with an expected mycelial growth of 7.60 cm. The stationary point for mycelial growth in Alnus japonica sawdust media was 28.74% (w/w) rice bran, pH 6.04, and 66.96% (w/v) moisture content, with an expected mycelial growth of about 5.38 cm. Based on these results, there were correlations between mycelial growth and the independent variables, and the effects of rice bran ($X_1$) and initial pH ($X_2$) on mycelial growth were greater than that of moisture content ($X_3$). The sawdust media ranked for mycelial growth of Phellinus sp. in the order Quercus aliena > Morus alba > Alnus japonica.
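
The response-surface step can be pictured with the short sketch below: it fits a second-order polynomial in the three factors (rice-bran ratio $X_1$, initial pH $X_2$, moisture content $X_3$) and solves for the stationary point. The synthetic data and function names are illustrative assumptions, not the study's measurements.

```python
# Minimal sketch (illustrative): fitting a second-order response surface in three
# factors and locating its stationary point, as in a standard RSM analysis.
import numpy as np

def design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,      # linear terms
        x1**2, x2**2, x3**2,              # quadratic terms
        x1*x2, x1*x3, x2*x3,              # interaction terms
    ])

def fit_rsm(X, y):
    beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
    return beta

def stationary_point(beta):
    """Solve grad(y) = b + 2*B*x = 0 for the fitted quadratic surface."""
    b = beta[1:4]
    B = np.array([
        [beta[4],     beta[7] / 2, beta[8] / 2],
        [beta[7] / 2, beta[5],     beta[9] / 2],
        [beta[8] / 2, beta[9] / 2, beta[6]],
    ])
    return np.linalg.solve(-2.0 * B, b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic design points: bran % (w/w), initial pH, moisture % (w/v).
    X = rng.uniform([20, 4, 60], [40, 7, 75], size=(30, 3))
    y_true = lambda x: 8 - 0.01*(x[:, 0]-31)**2 - 0.5*(x[:, 1]-5.3)**2 - 0.02*(x[:, 2]-69)**2
    y = y_true(X) + rng.normal(scale=0.1, size=30)
    beta = fit_rsm(X, y)
    print("stationary point (X1, X2, X3):", stationary_point(beta))
```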

Biocompatibility of Tissue-Engineered Heart Valve Leaflets Based on Acellular Xenografts (세포를 제거한 이종 심장 판막 이식편을 사용한 조직공학 심장 판막첨의 생체 적합성에 대한 연구)

  • 이원용;성상현;김원곤
    • Journal of Chest Surgery / v.37 no.4 / pp.297-306 / 2004
  • Current artificial heart valves have several disadvantages, such as thromboembolism, limited durability, infection, and inability to grow. A solution to these problems would be to develop tissue-engineered heart valves containing autologous cells. The aim of this study was to optimize a protocol for obtaining a porcine acellular matrix and seeding goat autologous endothelial cells onto it, and to evaluate the biological responses of xenograft and xeno-autograft heart valves in goats. Material and Method: Fresh porcine pulmonic valves were treated with one of three representative decellularization protocols (Triton-X, freeze-thawing, and NaCl-SDS). Goat venous endothelial cells were isolated and seeded onto the acellularized xenograft leaflets. Microscopic examinations were performed to select the most effective method of removing xenogeneic cells and seeding autologous endothelial cells. Two pulmonic valve leaflets in each of 6 goats were replaced with acellularized porcine leaflets, with or without seeded autologous endothelial cells, while on cardiopulmonary bypass. Goats were sacrificed electively at 6 hours, 1 day, 1 week, 1 month, 3 months, and 6 months after operation, and morphologic examinations were performed to assess the biological responses of the replaced valve leaflets. Result: Microscopic examination showed that porcine cells were almost completely removed from the leaflets treated with NaCl-SDS, and the seeded endothelial cells were more evenly preserved after NaCl-SDS treatment. All 6 goats survived the operation without complications. The xeno-autografts and xenografts showed the appearance, the remodeling process, and the cellular functions of myofibroblasts at 1 day, 1 month, and 3 months after operation, respectively. They were compatible with the native pulmonary leaflets (control group) except for increased cellularity at 6 months, at which time the xenografts also revealed a new endothelial cell lining. Conclusion: Treatment with NaCl-SDS was the most effective at producing decellularized xenografts and facilitated seeding of autologous endothelial cells. The xenografts and xeno-autografts were repopulated with myofibroblasts and endothelial cells in situ over time. Both grafts served as a matrix for a tissue-engineered heart valve and developed into autologous tissue over 6 months.

Growth and optical conductivity properties for MnAl2S4 single crystal thin film by hot wall epitaxy method (Hot Wall Epitaxy(HWE)법에 의한 MnAl2S4 단결정 박막 성장과 광전도 특성)

  • You, Sangha;Lee, Kijeong;Hong, Kwangjoon;Moon, Jongdae
    • Journal of the Korean Crystal Growth and Crystal Technology / v.24 no.6 / pp.229-236 / 2014
  • A stoichiometric mixture of evaporation materials for $MnAl_2S_4$ single-crystal thin films was prepared in a horizontal electric furnace. To obtain the single-crystal thin films, $MnAl_2S_4$ mixed crystal was deposited on a thoroughly etched semi-insulating GaAs(100) substrate using the Hot Wall Epitaxy (HWE) system; the source and substrate temperatures were 630°C and 410°C, respectively. The crystalline structure of the thin films was investigated by photoluminescence and double-crystal X-ray diffraction (DCXD). The temperature dependence of the energy band gap of $MnAl_2S_4$, obtained from the absorption spectra, was well described by the Varshni relation $E_g(T) = 3.7920\,eV - (5.2729{\times}10^{-4}\,eV/K)\,T^2/(T + 786\,K)$. To explore the applicability as a photoconductive cell, we measured the sensitivity ($\gamma$), the ratio of photocurrent to dark current (pc/dc), the maximum allowable power dissipation (MAPD), and the response time. The photoconductive characteristics were best for samples annealed in S vapour, compared with those annealed in Mn, Al, air, and vacuum. For these samples we obtained a sensitivity of 0.93, a pc/dc ratio of $1.10{\times}10^7$, an MAPD of 316 mW, and rise and decay times of 14.8 ms and 12.1 ms, respectively.
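
The quoted Varshni fit is easy to evaluate directly. The small sketch below computes the band gap at a few temperatures using only the constants given in the abstract; the function name and chosen temperatures are ours.

```python
# Quick check of the Varshni fit quoted above: E_g(T) = E_g(0) - a*T^2 / (T + b),
# with E_g(0) = 3.7920 eV, a = 5.2729e-4 eV/K, and b = 786 K taken from the abstract.
def varshni_band_gap(T, Eg0=3.7920, alpha=5.2729e-4, beta=786.0):
    """Band gap (eV) of the MnAl2S4 epilayer at temperature T (K)."""
    return Eg0 - alpha * T**2 / (T + beta)

if __name__ == "__main__":
    for T in (10, 77, 150, 293):
        print(f"T = {T:3d} K  ->  Eg = {varshni_band_gap(T):.4f} eV")
```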

Quantitative Analysis of GBCA Reaction by Mol Concentration Change on MRI Sequence (MRI sequence에 따른 GBCA 몰농도별 반응에 대한 정량적 분석)

  • Jeong, Hyun Keun;Jeong, Hyun Do;Kim, Ho Chul
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.2 / pp.182-192 / 2015
  • In this paper we investigate how the reaction of a GBCA (Gadolinium Based Contrast Agent) changes with molar concentration in contrast-enhanced MRI, and how the pattern of change depends on MRI sequences based on different physical principles. For this study we made an MRI phantom ourselves: 500 mmol Gadoteridol was diluted with saline in 28 different containers covering concentrations from 500 to 0 mmol. The phantom was then scanned in a 1.5 T bore with physically different MRI sequences: T1 SE, T2 FLAIR, T1 FLAIR, 3D FLASH, T1 3D SPACE, and 3D SPCIR. The results were as follows. T1 spin echo: total SI (signal intensity) 15608.7, maximum peak 1352.6 at 1 mmol. T2 FLAIR: total SI 9106.4, maximum peak 0.4 1721.6 at 1 mmol. T1 FLAIR: total SI 20972.5, maximum peak 1604.9 at 1 mmol. 3D FLASH: total SI 20924.0, maximum peak 1425.7 at 40 mmol. 3D SPACE 1 mm: total SI 6399.0, maximum peak 528.3 at 3 mmol. 3D SPACE 5 mm: total SI 6276.5, maximum peak 514.6 at 2 mmol. 3D SPCIR: total SI 1778.8, maximum peak 383.8 at 0.4 mmol. In most sequences, high signal intensity appeared at diluted, lower concentrations rather than at high concentrations, and the maximum peak and the shape of the concentration-signal curve differed from sequence to sequence. Based on these quantitative results on the GBCA reaction rate by sequence, we expect that practical contrast-enhanced MR protocols can be designed for the clinical field.
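
As a small illustration of the tabulation described in the abstract (not the authors' analysis code), the helper below computes a total SI and the peak SI, with the concentration at which the peak occurs, for one sequence's dilution-series measurements; the input arrays would come from ROI measurements on the phantom.

```python
# Minimal sketch (illustrative): summarizing one sequence's dilution-series data.
import numpy as np

def summarize_sequence(concentrations_mmol, signal_intensity):
    """Return (total SI, peak SI, concentration at the peak) for one sequence."""
    c = np.asarray(concentrations_mmol, dtype=float)
    si = np.asarray(signal_intensity, dtype=float)
    k = int(np.argmax(si))                      # index of the maximum peak
    return float(si.sum()), float(si[k]), float(c[k])
```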

Time-lapse crosswell seismic tomography for monitoring injected $CO_2$ in an onshore aquifer, Nagaoka, Japan (일본 Nagaoka의 육상 대수층에 주입된 $CO_2$의 관찰을 위한 시간차 시추공간 탄성파 토모그래피)

  • Saito, Hideki;Nobuoka, Dai;Azuma, Hiroyuki;Xue, Ziqiu;Tanase, Daiji
    • Geophysics and Geophysical Exploration / v.9 no.1 / pp.30-36 / 2006
  • Japan's first pilot-scale $CO_2$ sequestration experiment has been conducted in Nagaoka, where 10400 t of $CO_2$ have been injected in an onshore aquifer at a depth of about 1100 m. Among various measurements conducted at the site for monitoring the injected $CO_2$, we conducted time-lapse crosswell seismic tomography between two observation wells to determine the distribution of $CO_2$ in the aquifer by the change of P-wave velocities. This paper reports the results of the crosswell seismic tomography conducted at the site. The crosswell seismic tomography measurements were carried out three times; once before the injection as a baseline survey, and twice during the injection as monitoring surveys. The velocity tomograms resulting from the monitoring surveys were compared to the baseline survey tomogram, and velocity difference tomograms were generated. The velocity difference tomograms showed that velocity had decreased in a part of the aquifer around the injection well, where the injected $CO_2$ was supposed to be distributed. We also found that the area in which velocity had decreased was expanding in the formation up-dip direction, as increasing amounts of $CO_2$ were injected. The maximum velocity reductions observed were 3.0% after 3200 t of $CO_2$ had been injected, and 3.5% after injection of 6200 t of $CO_2$. Although seismic tomography could map the area of velocity decrease due to $CO_2$ injection, we observed some contradictions with the results of time-lapse sonic logging, and with the geological condition of the cap rock. To investigate these contradictions, we conducted numerical experiments simulating the test site. As a result, we found that part of the velocity distribution displayed in the tomograms was affected by artefacts or ghosts caused by the source-receiver geometry for the crosswell tomography in this particular site. The maximum velocity decrease obtained by tomography (3.5%) was much smaller than that observed by sonic logging (more than 20%). The numerical experiment results showed that only 5.5% velocity reduction might be observed, although the model was given a 20% velocity reduction zone. Judging from this result, the actual velocity reduction can be more than 3.5%, the value we obtained from the field data reconstruction. Further studies are needed to obtain more accurate velocity values that are comparable to those obtained by sonic logging.
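
The velocity-difference step described in the abstract amounts to a cell-by-cell percentage change between the monitor and baseline velocity models. The sketch below shows that computation on synthetic grids; the grid size, velocities, and reduced zone are made-up values, not the Nagaoka models.

```python
# Minimal sketch (illustrative): forming a velocity-difference tomogram as the
# percentage change of a monitor-survey model relative to the baseline model.
import numpy as np

def velocity_difference_pct(v_baseline, v_monitor):
    """Percent velocity change per cell; negative values mark velocity reduction."""
    return 100.0 * (v_monitor - v_baseline) / v_baseline

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v0 = 2500.0 + rng.normal(scale=20.0, size=(60, 40))   # baseline model (m/s)
    v1 = v0.copy()
    v1[25:35, 15:25] *= 0.965                              # ~3.5% reduction near the injector
    dv = velocity_difference_pct(v0, v1)
    print("max velocity reduction: %.1f %%" % -dv.min())
```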

Correlation among Ownership of Home Appliances Using Multivariate Probit Model (다변량 프로빗 모형을 이용한 가전제품 구매의 상관관계 분석)

  • Kim, Chang-Seob;Shin, Jung-Woo;Lee, Mi-Suk;Lee, Jong-Su
    • Journal of Global Scholars of Marketing Science / v.19 no.2 / pp.17-26 / 2009
  • As consumer lifestyles change and the need for various products increases, new products are continually being developed in the market. Each household owns various home appliances purchased through the choices of a decision maker, ranging from large products such as TVs, refrigerators, and washing machines to small products such as microwave ovens and air cleaners. Although these appliances are purchased independently, there is latent correlation among their ownership. The purpose of this research is to analyze the effect of demographic factors on the purchase and ownership of each home appliance and to derive relationships among the various appliances. To this end, the ownership status of each home appliance is investigated through consumer survey data on electric and energy products, and a multivariate probit (MVP) model is applied for the empirical analysis. The estimation results show that some appliances exhibit a substitutive or complementary pattern, as expected, while others that look apparently unrelated are correlated through co-incidence of ownership. This research has several advantages over previous studies of home appliances. First, it covers the various products purchased by each household, whereas previous studies such as Matsukawa and Ito (1998) and Yoon (2007) focus on a particular product. Second, the methodology can consider the choice process for each product and the correlation among products simultaneously. Lastly, it can analyze not only substitutive or complementary relationships within the same category but also correlations among products in different categories. Because the data on household appliance ownership involve multiple choices rather than a single choice, an MVP model is used for the empirical analysis. The MVP model is derived from a random utility model and, unlike a multinomial logit model, allows the correlation among error terms to be derived (Manchanda et al., 1999; Edwards and Allenby, 2003). The error term is assumed to follow a normal distribution with zero mean and variance-covariance matrix $\Omega$, so the sign and value of the correlation coefficients indicate the relationship between two alternatives (Manchanda et al., 1999). This research uses data from the 'TEMEP Household ICT/Energy Survey (THIES) 2008', conducted by the Technology Management, Economics and Policy Program at Seoul National University. The empirical analysis proceeds in two steps. First, an MVP model with demographic variables is estimated to analyze the effect of household characteristics, such as education level, region, family size, average income, and type of house, on the purchase of each home appliance. Second, an MVP model excluding demographic variables is estimated to analyze the correlation among the appliances. According to the estimated variance-covariance matrix, households tend to own certain appliances together, such as washing machine-refrigerator-cleaner-microwave oven and air conditioner-dish washer-washing machine, whereas several products, such as analog CRT TV versus digital CRT TV and desktop PC versus portable PC, show a substitutive pattern. Lastly, a correlation map of home appliances is derived using the multi-dimensional scaling (MDS) method based on the estimated variance-covariance matrix. This research provides significant implications for firms' marketing strategies such as bundling, pricing, and display, and significant information for the development of convergence products and related technologies: a convergence product can reduce its market uncertainty if two products that consumers tend to purchase together are integrated into it. The results are all the more meaningful because they are based on the ownership status of each household obtained through survey data.
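
As an illustration of how substitute and complement patterns are read off the estimated error covariance $\Omega$, consider the sketch below; the covariance matrix and appliance labels are made-up placeholders, not the paper's estimates.

```python
# Minimal sketch (illustrative): reading substitute/complement patterns off an
# estimated multivariate-probit error covariance. MVP identification usually fixes
# the error variances to 1, so Omega is already a correlation matrix here;
# cov_to_corr is shown for the general case.
import numpy as np

appliances = ["washing machine", "refrigerator", "desktop PC", "portable PC"]

# Hypothetical estimated covariance Omega of the latent-utility error terms.
omega = np.array([
    [1.00,  0.45,  0.05,  0.02],
    [0.45,  1.00,  0.08,  0.03],
    [0.05,  0.08,  1.00, -0.38],
    [0.02,  0.03, -0.38,  1.00],
])

def cov_to_corr(cov):
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

if __name__ == "__main__":
    corr = cov_to_corr(omega)
    for i in range(len(appliances)):
        for j in range(i + 1, len(appliances)):
            tag = "complementary" if corr[i, j] > 0 else "substitutive"
            print(f"{appliances[i]} vs {appliances[j]}: rho = {corr[i, j]:+.2f} ({tag})")
```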
