• Title/Summary/Keyword: Construction Efficiency

Search Results: 3,496

Construction of Retrovirus Vector System for the Regulation of Recombinant hTPO Gene Expression (재조합 hTPO 유전자의 발현 조절을 위한 Retrovirus Vector System의 구축)

  • Kwon, Mo-Sun;Koo, Bon-Chul;Kim, Do-Hyang;Kim, Te-Oan
    • Reproductive and Developmental Biology
    • /
    • v.31 no.3
    • /
    • pp.161-167
    • /
    • 2007
  • In this study, we constructed and tested retrovirus vectors designed to express the human thrombopoietin (hTPO) gene under the control of tetracycline-inducible promoters. To increase hTPO gene expression in the turn-on state, the WPRE sequence was also introduced into the retrovirus vector downstream of either the hTPO gene or the sequence encoding the reverse tetracycline-controlled transactivator (rtTA). Primary culture cells (PFF, porcine fetal fibroblast; CEF, chicken embryonic fibroblast) infected with the recombinant retrovirus were cultured in medium supplemented with or without doxycycline for 48 hr, and induction efficiency was measured by comparing hTPO gene expression levels using RT-PCR, western blot, and ELISA. Higher hTPO expression and tighter expression control were observed with the vector in which the WPRE sequence was placed downstream of the hTPO (in CEF) or rtTA (in PFF) gene. The resulting tetracycline-inducible vector system may help solve the serious physiological disturbance problems that have been a major obstacle to the successful production of transgenic animals.

A Study on the Effects of BIM Adoption and Methods of Implementation in Landscape Architecture through an Analysis of Overseas Cases (해외사례 분석을 통한 조경분야에서의 BIM 도입효과 및 실행방법에 관한 연구)

  • Kim, Bok-Young;Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.45 no.1
    • /
    • pp.52-62
    • /
    • 2017
  • Overseas landscape practices have already benefited from the awareness of BIM, landscape-related organizations are encouraging its use, and the number of landscape projects using BIM is increasing. However, since BIM has not yet been introduced in the domestic field, this study investigated and analyzed overseas landscape projects and discussed the positive effects and implementation of BIM. For this purpose, landscape projects were selected to show three effects of BIM: improvement of design work efficiency, building of a platform for cooperation, and performance of topography design. These three projects were analyzed across four aspects of implementation methods: landscape information, 3D modeling, interoperability, and visualization uses of BIM. First, in terms of landscape information, a variety of building information was constructed in the form of 3D libraries or 2D CAD format, from detailed landscape elements to infrastructure. Second, for 3D modeling, a landscape space including simple terrain and trees was modeled with Revit, while elaborate and complex terrain was modeled with Maya, a professional 3D modeling tool. One integrated model was produced by periodically exchanging, reviewing, and finally combining the models from interdisciplinary fields. Third, interoperability of data from different fields was achieved through the unification of file formats, conversion of differing formats, or compliance with information standards. Lastly, visualized 3D models helped coordination among project partners, approval of design, and promotion through public media. A review of the case studies shows that BIM functions as a process to improve work efficiency and interdisciplinary collaboration, rather than simply as a design tool. It also verified that landscape architects can play an important role in integrated projects using BIM. Just as the introduction of BIM into the architecture, engineering, and construction industries brought great benefits and opportunities, BIM should also be introduced to landscape architecture.

Lessons from Cross-Scale Studies of Water and Carbon Cycles in the Gwangneung Forest Catchment in a Complex Landscape of Monsoon Korea (몬순기후와 복잡지형의 특성을 갖는 광릉 산림유역의 물과 탄소순환에 대한 교차규모 연구로부터의 교훈)

  • Lee, Dong-Ho;Kim, Joon;Kim, Su-Jin;Moon, Sang-Ki;Lee, Jae-Seok;Lim, Jong-Hwan;Son, Yow-Han;Kang, Sin-Kyu;Kim, Sang-Hyun;Kim, Kyong-Ha;Woo, Nam-Chil;Lee, Bu-Yong;Kim, Sung
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.9 no.2
    • /
    • pp.149-160
    • /
    • 2007
  • KoFlux Gwangneung Supersite comprises complex topography and diverse vegetation types (and structures), which necessitate complementary multi-disciplinary measurements to understand energy and matter exchange. Here, we report the results of this ongoing research with special focus on carbon/water budgets in the Gwangneung forest, implications of the inter-dependency between water and carbon cycles, and the importance of hydrology in carbon cycling under a monsoon climate. Comprehensive biometric and chamber measurements indicated the mean annual net ecosystem productivity (NEP) of this forest to be ~2.6 t C ha⁻¹ y⁻¹. In conjunction with the tower flux measurement, the preliminary carbon budget suggests the Gwangneung forest to be an important sink for atmospheric CO₂. The catchment-scale water budget indicated that 30~40% of annual precipitation was apportioned to evapotranspiration (ET). The growing-season average of the water use efficiency (WUE), determined from leaf carbon isotope ratios of representative tree species, was about 12 μmol CO₂/mmol H₂O, with noticeable seasonal variations. Such information on ET and WUE can be used to constrain the catchment-scale carbon uptake. Inter-annual variations in tree ring growth and soil respiration rates correlated with the magnitude and pattern of precipitation during the growing season, which calls for further investigation of the effect of a monsoon climate on the catchment carbon cycle. Additionally, we examine whether structural and functional units exist in this catchment by characterizing the spatial heterogeneity of the study site, which will provide the linkage between measurements at different spatial and temporal scales.

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.159-172
    • /
    • 2010
  • The recommender system is one possible solution for assisting customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer, and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms, which combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; we still know little about how CF works. Furthermore, the relative performance of CF algorithms is known to be domain- and data-dependent. Implementing and launching a CF recommender system is time-consuming and expensive, and a system unsuited to the given domain delivers poor-quality recommendations that quickly annoy customers. Therefore, predicting the performance of CF algorithms in advance is of practical importance. In this study, we propose an efficient approach to predict the performance of CF. Social Network Analysis (SNA) and Artificial Neural Network (ANN) techniques are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in data for CF recommendations.
An ANN model is developed through an analysis of network topology, including network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how much denser the social network is than the minimum barely needed to keep the group even indirectly connected. We use these social network measures as input variables of the ANN model. As the output variable, we use the recommendation accuracy measured by the F1-measure. To evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, were used. A total of 396 experimental samples were gathered, of which 40%, 40%, and 20% were used for training, test, and validation, respectively. Five-fold cross validation was also conducted to enhance the reliability of our experiments. The input variable measuring process consists of three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model achieves 92.61% estimated accuracy with an RMSE of 0.0049. Thus, our prediction model can help decide whether CF is useful for a given application with certain data characteristics.
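The network measures listed above can be computed directly from a co-purchase graph. The sketch below is a hypothetical illustration rather than the study's actual pipeline (which used NetMiner and UCINET): it derives density, clustering coefficient, inclusiveness, and degree centralization in plain Python over an invented set of customer links.

```python
from collections import defaultdict

# Hypothetical co-purchase links between customers 1..7 (customer 7 is isolated).
nodes = list(range(1, 8))
links = [(1, 2), (1, 3), (2, 3), (3, 4), (5, 6)]

adj = defaultdict(set)
for a, b in links:
    adj[a].add(b)
    adj[b].add(a)

n = len(nodes)

# Network density: links present as a proportion of the maximum possible links.
density = len(links) / (n * (n - 1) / 2)

# Clustering coefficient: average share of a node's neighbor pairs that are
# themselves linked (captures localized pockets of dense connectivity).
def local_clustering(v):
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    closed = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
    return closed / (k * (k - 1) / 2)

clustering = sum(local_clustering(v) for v in nodes) / n

# Inclusiveness: share of nodes attached to some connected part of the network.
inclusiveness = sum(1 for v in nodes if adj[v]) / n

# Degree centralization (Freeman): how concentrated links are in a few nodes.
degrees = [len(adj[v]) for v in nodes]
centralization = sum(max(degrees) - d for d in degrees) / ((n - 1) * (n - 2))

features = [density, clustering, inclusiveness, centralization]
print(features)  # candidate input variables for the ANN model
```

Krackhardt's efficiency is omitted here for brevity; it would compare the link count against the minimum needed to keep each component connected.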

Analysis of Fish Utilization and Effectiveness of Fishways Installed at Weirs in Large Rivers (대하천 보에 설치된 어도의 어류 이용 현황 및 효과 분석)

  • Jeong-Hui Kim;Sang-Hyeon Park;Seung-Ho Baek;Namjoo Lee;Min-Ho Jang;Ju-Duk Yoon
    • Korean Journal of Ecology and Environment
    • /
    • v.56 no.4
    • /
    • pp.348-362
    • /
    • 2023
  • This study analyzed the monitoring results of fishways at 16 weirs constructed on four large rivers to provide data helpful for the operation and management of fishways. The average utilization rate of the fishways at the weirs was 64.9%. When comparing the dominant species in the mainstream and fishway monitoring results, differences were observed at 9 weirs (56.3%), indicating that the species prevalent in the mainstream were not necessarily the ones most frequently using the fishways. The average number of individuals using the fishways per day was 336. When classifying the fish species using the fishways by life type, 92.3% were primary freshwater fish, and migratory species accounted for only 5.6%. Analysis by season revealed that an average or higher number of fish species used the fishways from May to October, with the highest number of individuals from June to August. Between May and July, 80% of the fish species using the fishways were in their spawning period, while during other seasons, less than 40% were species that move during the spawning period. The species showing a significant alignment between the spawning period and the fishway passage period were Rhinogobius brunneus, Leiocassis nitidus, Squalidus chankaensis tsuchigae, Pseudogobio esocinus, Acheilognathus rhombeus, and Pungtungia herzi, in that order. When comparing the fishway monitoring results of the Gangjeong-Goryeong Weir and the Dalseong Weir with the upstream water level of each weir, both the number of fish species and the number of individuals using the fishway showed positive correlations with the upstream water level. This suggests that a higher water level increases the inflow discharge within the fishway, leading to increased use by fish (number of individuals at Gangjeong-Goryeong Weir, P<0.001; number of species at Dalseong Weir, P<0.05).
This study summarized and analyzed the results of fishway monitoring at 16 weirs built on four large rivers, considering fishway efficiency, operation and management, monitoring period, and regulation of the upstream water level. These findings should help in understanding the status of fish use of fishways on large rivers and aid the construction, operation, and management of fishways in the future.

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.234-240
    • /
    • 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton-scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered-subsets principle, can be applied to the problem of image reconstruction for the Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered-subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were performed with 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to the standard EM with 64 iterations, was approximately 14 times faster in computation time than the standard EM. In OSEM, all three schemes for choosing subsets yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for the Compton camera. With properly chosen subset construction methods and moderate numbers of subsets, our OSEM algorithm significantly improves computational efficiency while keeping the original quality of the standard EM reconstruction. The OSEM algorithm with subsets based on both scatter angle and detector position appears the most suitable.
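As a rough illustration of the ordered-subsets scheme compared above, the sketch below applies OSEM-style multiplicative updates to a generic nonnegative linear model y = Ax. The Compton-camera system matrix, the subset ordering (by scatter angle or detector position), and all dimensions are invented stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_pix = 32, 16
A = rng.random((n_bins, n_pix))      # stand-in system matrix (entries > 0)
x_true = rng.random(n_pix) + 0.1     # stand-in source image
y = A @ x_true                       # noiseless projection data

def osem(A, y, n_subsets=4, n_iter=4):
    """Cycle over ordered subsets, applying one EM-style update per subset."""
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    x = np.ones(A.shape[1])          # flat initial image
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            ratio = ys / (As @ x + 1e-12)                 # measured / estimated
            x *= (As.T @ ratio) / (As.T @ np.ones(len(s)) + 1e-12)
    return x

x_hat = osem(A, y, n_subsets=4, n_iter=4)
# Residual projection error shrinks relative to the flat starting image.
print(np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y))
```

Each pass over all subsets costs roughly one full EM iteration but applies several updates, which is the source of the acceleration the abstract reports.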

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining considering the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategies utilizing item importance, itemset mining approaches for discovering itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. The mining algorithms compute transactional weights by utilizing the weight of each item in large databases. In addition, these algorithms discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, we can see the importance of a certain transaction through database analysis, because the weight of a transaction is higher if it contains many items with high values. We not only analyze the advantages and disadvantages but also compare the performance of the most prominent algorithms in the field of frequent itemset mining based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To efficiently mine weighted frequent itemsets, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms need no additional database scan after the WIT-tree has been constructed, since each node of the WIT-tree holds item information such as item and transaction IDs.
In particular, the traditional algorithms conduct a number of database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs the itemset combination process using the information of transactions that contain all the itemsets. WIT-FWIs-MODIFY adds a unique feature that decreases the operations for calculating the frequency of the new itemset. WIT-FWIs-DIFF utilizes a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) in terms of runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm when the size of the database changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset, WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, based on the Apriori technique, has the worst efficiency because it requires far more computations than the others on average.
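The transactional-weight idea can be sketched in a few lines: each transaction's weight is derived from the weights of its items (here, their average), and an itemset's weighted support is the share of total transaction weight carried by the transactions containing it. The item weights, the toy transactions, and the averaging rule are illustrative; the WIT-tree structure itself is not reproduced.

```python
from itertools import combinations

# Made-up item weights and transactions for illustration.
item_weight = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.8}
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"b", "c", "d"},
    {"a", "d"},
]

# Weight of a transaction = average weight of the items it contains, so
# transactions holding many high-value items weigh more.
t_weight = [sum(item_weight[i] for i in t) / len(t) for t in transactions]
total_w = sum(t_weight)

def weighted_support(itemset):
    """Share of total transaction weight carried by transactions
    containing every item of the itemset."""
    w = sum(tw for t, tw in zip(transactions, t_weight) if itemset <= t)
    return w / total_w

# Enumerate itemsets of length 1 and 2 above a weighted-support threshold.
items = sorted(item_weight)
threshold = 0.4
frequent = [set(c) for k in (1, 2)
            for c in combinations(items, k)
            if weighted_support(set(c)) >= threshold]
print(frequent)
```

A WIT-tree-based implementation would reach the same result while reading the database once and combining length-N itemsets into length-N+1 candidates via their transaction-ID lists.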

Estimation of Optimal Size of the Treatment Facility for Nonpoint Source Pollution due to Watershed Development (비점오염원의 정량화방안에 따른 적정 설계용량결정)

  • Kim, Jin-Kwan
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.8 no.6
    • /
    • pp.149-153
    • /
    • 2008
  • The pollutant load generated before and after the development of a watershed should be quantitatively estimated and controlled to minimize water contamination. The Ministry of Environment has provided a guideline for the legal management of nonpoint sources since 2006. However, the rational method for determining the treatment capacity for nonpoint sources proposed in the guideline is problematic in field application because it does not reflect project-specific cases and overestimates the pollutant load to be reduced. Therefore, we perform a standard rainfall analysis by an analytical probabilistic method to estimate the additional pollutant load generated by a project, and suggest a methodology for estimating contaminant capacity in place of the simple rational method. The suggested methodology can determine a reasonable capacity and efficiency for a treatment facility through the estimation of the pollutant load from nonpoint sources, allowing appropriate watershed management. We applied the suggested methodology to housing land development and dam construction projects in the watersheds. When the treatment capacity is determined by the rational method, without consideration of project type, 90% of the pollutant load generated by the development must be treated, and to do so about 30% of the total development cost would have to be invested in the treatment facility, which is prohibitively expensive and unrealistic. With the suggested method, the target pollutant load to be reduced is 10 to 30% of the load generated by the development, and about 5 to 10% of the total cost suffices. The control of nonpoint sources must be performed for water resources management; however, it is not feasible to treat 90% of the pollutant load generated by development.
The proper pollutant capacity from nonpoint sources should be estimated and controlled according to project type, which in reality is very important for watershed management. Therefore, the results of this study may be more reasonable than the rational method proposed by the Ministry of Environment.

Optimum Design of Soil Nailing Excavation Wall System Using Genetic Algorithm and Neural Network Theory (유전자 알고리즘 및 인공신경망 이론을 이용한 쏘일네일링 굴착벽체 시스템의 최적설계)

  • 김홍택;황정순;박성원;유한규
    • Journal of the Korean Geotechnical Society
    • /
    • v.15 no.4
    • /
    • pp.113-132
    • /
    • 1999
  • Recently in Korea, application of soil nailing has gradually extended to excavation and slope sites with various ground conditions and field characteristics. Design of soil nailing is generally carried out in two steps. The first step is to examine the minimum safety factor against sliding of the reinforced nailed-soil mass based on the limit equilibrium approach, and the second step is to check the maximum displacement expected to occur at the facing using a numerical analysis technique. However, the design parameters of a soil nailing system are so numerous that a reliable design method considering the interrelationships between them is needed. Additionally, taking into account the anisotropic characteristics of in-situ grounds, disturbances in collecting soil samples, and errors in measurements, a systematic analysis of field measurement data as well as a rational optimum-design technique is required to improve economic efficiency. To this end, in the present study, a procedure for the optimum design of a soil nailing excavation wall system is proposed. Focusing on minimizing construction expenses, the optimum design procedure is formulated based on the genetic algorithm. Neural network theory is further adopted to predict the maximum horizontal displacement at the shotcrete facing. Using the proposed procedure, the effects of relevant design parameters are also analyzed. Finally, an optimized design section is compared with the existing design section at an excavation site under construction, to verify the validity of the proposed procedure.
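A cost-minimizing genetic algorithm of the kind described can be sketched as follows. The design variables (nail length L and spacing S), the cost model, the safety-factor surrogate, and the FS ≥ 1.5 requirement are all invented stand-ins; in the actual procedure, a limit-equilibrium safety check and a neural-network displacement prediction would constrain the search.

```python
import random

random.seed(0)

def cost(L, S):
    # Hypothetical construction cost: longer nails and tighter spacing cost more.
    return 10.0 * L + 50.0 / S

def safety_factor(L, S):
    # Hypothetical surrogate for the limit-equilibrium safety check.
    return 0.8 + 0.08 * L / S

def fitness(ind):
    L, S = ind
    # Graded penalty steers the search toward designs with FS >= 1.5.
    violation = max(0.0, 1.5 - safety_factor(L, S))
    return cost(L, S) + 1000.0 * violation  # minimized

def evolve(pop_size=30, gens=60):
    # Random initial population within bounds: L in [3, 12] m, S in [0.5, 2.5] m.
    pop = [(random.uniform(3, 12), random.uniform(0.5, 2.5))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]      # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # Arithmetic crossover plus Gaussian mutation, clipped to bounds.
            L = min(12.0, max(3.0, (a[0] + b[0]) / 2 + random.gauss(0, 0.3)))
            S = min(2.5, max(0.5, (a[1] + b[1]) / 2 + random.gauss(0, 0.1)))
            children.append((L, S))
        pop = survivors + children
    return min(pop, key=fitness)

best_L, best_S = evolve()
print(best_L, best_S, cost(best_L, best_S), safety_factor(best_L, best_S))
```

In the proposed procedure, evaluating `fitness` for one candidate design would additionally query the trained neural network for the predicted facing displacement and penalize designs exceeding the allowable value.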


Evaluation of the Nutrient Removal Performance of the Pilot-scale KNR (Kwon's Nutrient Removal) System with Dual Sludge for Small Sewage Treatment (소규모 하수처리를 위한 파일럿 규모 이중슬러지 KNR® (Kwon's nutrient removal) 시스템의 영양염류 제거성능 평가)

  • An, Jin-Young;Kwon, Joong-Chun;Kim, Yun-Hak;Jeng, Yoo-Hoon;Kim, Doo-Eon;Ryu, Sun-Ho;Kim, Byung-Woo
    • Clean Technology
    • /
    • v.12 no.2
    • /
    • pp.67-77
    • /
    • 2006
  • A simple dual-sludge process, called the KNR® (Kwon's Nutrient Removal) system, was developed for small-scale sewage treatment. It is a hybrid system consisting of a UMBR (upflow multi-layer bioreactor), serving as an anaerobic and anoxic reactor with suspended denitrifiers, and a post aerobic biofilm reactor, filled with pellet-like media, with attached nitrifiers. To evaluate the stability and performance of this system for small sewage treatment, a pilot-scale KNR® plant with a treatment capacity of 50 m³/d was applied at an actual sewage treatment plant in a small rural community, which was under retrofit construction during the pilot plant operation. The HRTs of the UMBR and the post aerobic biofilm reactor were about 4.7 h and 7.2 h, respectively. The temperature in the reactor varied from 18.1°C to 28.1°C. The pilot plant showed stable performance even under severe fluctuations in influent flow rate and BOD/N ratio. Over the whole study period, the average concentrations of CODCr, CODMn, BOD5, TN, and TP in the final effluent were 11.0 mg/L, 8.8 mg/L, 4.2 mg/L, 3.5 mg/L, 9.8 mg/L, and 0.87/0.17 mg/L (with/without poly aluminium chloride (PAC)), corresponding to removal efficiencies of 95.3%, 87.6%, 96.3%, 96.5%, 68.2%, and 55.4/90.3%, respectively. Excess sludge production rates were 0.026 kg-DS/m³-sewage and 0.220 kg-DS/kg-BOD, 1.9 to 3.8 times lower than those of activated sludge-based systems such as A2O and Bardenpho.
