• Title/Summary/Keyword: D2D systems


A Study of Altered IL-6 and TNF-α Expression in Peritoneal Fluid of Patients with Endometriosis (자궁내막증 환자의 복강 액내 IL-6와 TNF-α의 변화 양상에 관한 연구)

  • Kang, Jeong-Bae;Lee, Young-Kyeong
    • Clinical and Experimental Reproductive Medicine / v.33 no.1 / pp.45-52 / 2006
  • Objective: Our purpose was to investigate the relationship between the levels of IL-6 and tumor necrosis factor-α (TNF-α) in the peritoneal fluid of women with and without endometriosis and of infertile women. Methods: This prospective case-control study at a university hospital enrolled thirty-four women with laparoscopic findings of minimal to severe endometriosis and thirty-seven women with no visual evidence of pelvic endometriosis and with benign gynecologic disease. IL-6 and TNF-α levels in peritoneal fluid were determined using commercial ELISA. IL-6 and TNF-α concentrations were compared between women with and without endometriosis and between infertile and fertile women, and were also compared according to the revised American Fertility Society classification. Results: IL-6 and TNF-α concentrations were higher in the peritoneal fluid of women with endometriosis than in matched normal controls. Cyclic variations in IL-6 concentrations were seen in peritoneal fluid from patients with endometriosis: concentrations in the secretory phase were significantly higher than those in the proliferative phase. Concentrations were also higher in infertile women than in fertile women. A significant correlation between IL-6 and TNF-α concentrations and endometriosis stages III and IV was noted. Conclusion: Increased levels of IL-6 and TNF-α in the peritoneal fluid of patients with endometriosis may be related to the pathogenesis of endometriosis, suggesting that they partially contribute to the disturbed immune regulation observed in these patients.
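
As a rough illustration of the group comparisons this abstract describes, the sketch below runs a nonparametric test between endometriosis and control cytokine levels and a rank correlation against disease stage; all concentrations, group sizes, and the specific test choice are hypothetical placeholders, not the study's data or methods.

```python
from scipy import stats

# Hypothetical peritoneal-fluid IL-6 concentrations (pg/mL)
il6_endo    = [18.2, 25.1, 30.4, 22.7, 27.9, 35.0]  # endometriosis group
il6_control = [ 9.5, 12.3,  8.8, 14.1, 10.7, 11.9]  # benign-disease controls

# One-sided Mann-Whitney U test: higher concentrations with endometriosis?
u, p = stats.mannwhitneyu(il6_endo, il6_control, alternative="greater")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")

# Rank correlation of IL-6 with rAFS stage (I-IV coded 1..4)
stage = [2, 3, 4, 2, 3, 4]
rho, p_rho = stats.spearmanr(il6_endo, stage)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```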

Effectiveness and characteristics of technology transfer consortia in public R&D sector: The case of Korean TT consortia (공공연구부문에서의 기술이전컨소시엄의 효과와 특성 연구: 공공기술이전컨소시엄 사례를 중심으로)

  • Park, Jong-Bok;Ryu, Tae-Kyu
    • Journal of Korea Technology Innovation Society / v.10 no.2 / pp.284-309 / 2007
  • A technology transfer (TT) consortium is an affiliation of two or more public research institutions (PRIs) that participate in a common technology transfer activity or pool their resources together, with the objective of facilitating technology transfer. Based on an empirical analysis of five regional TT consortia (2002-2006) operating in Korea, this paper assesses their effectiveness by employing a TT performance index (TTPI) and identifies their characteristics, such as motivations, facilitators, barriers, and challenges. The TTPI devised in the paper is a new composite TT performance index that measures how much the TT performance of a PRI changed in a designated year compared to a base year. All the performance indicators of the TTPI are structured around the TT process prevalent in Korea. Further, the TTPI can bring PRIs of different size and focus to the same scale for comparison by double-normalizing. The paper tests the effectiveness of TT consortia in raising the TT performance of member PRIs by highlighting the differences in TTPI between 2005 and 2001, and finds that the increase in TTPI for member PRIs was greater than that for non-member PRIs. As for the characteristics of TT consortia, the respective factors obtained from a survey of TT experts were analyzed with proportion tests of differences (Z tests) to compare the perspectives of intramural and extramural groups. One of the key findings is that stakeholder perspectives on motivations, facilitators, barriers, and challenges are generally homogeneous. Some notable responses are as follows: first, the most probable motivation to join a TT consortium is to share or exchange TT competences for enhanced performance; second, the most probable facilitator is the professional capability of consortium-hired personnel; third, the most probable barriers to an effective TT consortium are frequent change of the consortium director and passive participation of member PRIs; lastly, both publicizing TT consortia and developing performance metrics are the most important steps for improving TT consortia. Understanding the characteristics of TT consortia increases the likelihood of accelerated success, because the path of a TT consortium from formation to termination encompasses many concepts, processes, principles, and factors. Finally, an analysis of the survey data combined with expert interviews and observation data led the authors to derive five conditions critical to viable TT consortia in Korea at an early stage of technology transfer system development: policy infrastructure, proactive participation, excellent professionals, personal motivation, and teaming mechanisms. The Korean evidence is expected to be a starting point for developing and refining the theory of TT consortia and for additional studies in other countries.
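
The abstract does not publish the TTPI formula, so the following is only a plausible sketch of a double-normalized composite index: indicators are first normalized against each PRI's base-year values (removing absolute-size effects), then the resulting indices are rescaled across PRIs to one common scale. All indicator names and figures are invented.

```python
import numpy as np

def pri_index(base, target):
    """One PRI's composite index: mean ratio of each indicator to its
    base-year value (first normalization; removes absolute-size effects)."""
    return float(np.mean([target[k] / base[k] for k in base]))

def rescale(indices):
    """Second normalization: min-max rescale indices across PRIs so that
    institutions of different size and focus share one comparison scale."""
    a = np.asarray(indices, dtype=float)
    return (a - a.min()) / (a.max() - a.min())  # assumes values are not all equal

# Hypothetical PRIs: licensing income and license contracts,
# base year 2001 vs. designated year 2005
small_pri = pri_index({"income": 100, "contracts": 10},
                      {"income": 180, "contracts": 14})
large_pri = pri_index({"income": 2000, "contracts": 50},
                      {"income": 2400, "contracts": 55})
print(rescale([small_pri, large_pri]))  # comparable despite the size gap
```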


Influences of Firm Characteristics and the Host Country Environment on the Degree of Foreign Market Involvement (기업특성과 호스트국가 환경이 해외시장 관여도에 미치는 영향에 관한 연구)

  • Maktoba, Omar;Nwankwo, Sonny
    • Journal of Global Scholars of Marketing Science / v.19 no.2 / pp.5-16 / 2009
  • Against the backdrop of the increasing trend towards economic globalisation, many international firms indicate that decisions on how to enter foreign markets remain one of the key strategic challenges confronting them. Despite the rich body of literature on the topic, the fact that these challenges continue to dominate global marketing strategy discourses points to some evident lacunae. Accordingly, this paper considers the variables, categorised in terms of firm contexts (standardisation, market research, competition, structure, competitive advantage) and host-country contexts (economic development, cultural differences, regulation and political risk), which influence the degree of involvement of UK companies in overseas markets. The following hypotheses were drawn from the literature review: H1: The greater the level of competition, the higher the degree of involvement in the overseas market. H2: The more centralised the firm's organisation structure, the higher the degree of involvement in the overseas market. H3a: The adoption of a low-cost approach to competitive advantage will lead to a higher degree of involvement. H3b: The adoption of an innovation approach to competitive advantage will lead to a higher degree of involvement. H3c: The adoption of a market research approach to competitive advantage will lead to a higher degree of involvement. H3d: The adoption of a breadth-of-strategic-target approach to competitive advantage will lead to a lower degree of involvement. H4: The higher the degree of standardisation of the international marketing mix, the higher the degree of involvement. H5: The greater the degree of economic development in the host market, the higher the degree of involvement. H6: The greater the cultural differences between home and host countries, the lower the degree of involvement. H7: The greater the difference in regulations between the home country and the host country, the lower the degree of involvement. H8: The higher the political risk in the host country, the lower the degree of involvement. A questionnaire instrument was constructed using, wherever possible, validated measures of the concepts central to this study. Following two sets of mailings, 112 usable completed questionnaires were returned. Correlation analysis and multiple regression analysis were used to analyse the data. Statistically, the paper suggests that factors relating to the level of competition, competitive advantage and economic development strongly influence foreign market involvement. On the other hand, and unexpectedly, cultural factors (especially the individualism/collectivism and power distance dimensions) proved to have weak moderating effects, due in part to the pervading forces of globalisation and their attendant effect on global marketing. This paper contributes to the general literature in ways that point to two main implications. First, with respect to research on national systems, the study may hold out some important lessons, especially for developing nations. Most of these nations are known to be actively seeking to understand what it takes to attract foreign direct investment, expand domestic markets and move their economies from the margin to the mainstream global economy. Second, it should be recognised that competitive conditions remain in constant flux (even in mature industries and mature economies).
This implies that a range of home country factors may be as important as host country factors in explaining firms' strategic moves and the degree of foreign market involvement. Further research can consider the impact of the home country environment on foreign market involvement decisions. Such an investigation will potentially provide further perspectives not only on the influence of national origin but also how home country effects are confounded with industry effects.
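
A minimal sketch of the kind of multiple regression analysis the abstract reports, with hypothetical predictors and simulated responses standing in for the questionnaire data (only the sample size of 112 is taken from the abstract):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 112  # usable questionnaires reported in the abstract

# Hypothetical standardized predictors: competition, competitive advantage,
# host-country economic development, cultural difference
X = rng.normal(size=(n, 4))
# Simulated "degree of involvement" with effects shaped like the findings:
# strong positive for the first three, weak negative for cultural difference
y = X @ np.array([0.5, 0.4, 0.4, -0.1]) + rng.normal(scale=0.5, size=n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.params)    # fitted coefficients
print(model.pvalues)   # which factors test as significant
```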


Impacts of Argo temperature in East Sea Regional Ocean Model with a 3D-Var Data Assimilation (동해 해양자료동화시스템에 대한 Argo 자료동화 민감도 분석)

  • KIM, SOYEON;JO, YOUNGSOON;KIM, YOUNG-HO;LIM, BYUNGHWAN;CHANG, PIL-HUN
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.20 no.3 / pp.119-130 / 2015
  • The impact of Argo temperature assimilation on the analysis fields in the East Sea is investigated using DAESROM, the East Sea Regional Ocean Model with a 3-dimensional variational assimilation module (Kim et al., 2009). We produced analysis fields for 2009 in which temperature profiles, sea surface temperature (SST) and sea surface height (SSH) anomaly were assimilated (Exp. AllDa), and carried out an additional experiment withholding the Argo temperature data (Exp. NoArgo). When both experimental results are compared against the assimilated temperature profiles, the Root Mean Square Error (RMSE) of Exp. AllDa is generally lower than that of Exp. NoArgo. In particular, the Argo impact is large in the subsurface layer, with an RMSE difference of about 0.5°C. Based on the observations of 14 surface drifters, the Argo impact on the current and temperature fields in the surface layer is investigated. In general, surface currents along the drifter positions are improved in Exp. AllDa, and large RMSE differences (about 2.0~6.0 cm/s) between the two experiments are found for drifters that observed longer periods in the southern region, where Argo density was high. On the other hand, the Argo impact on the SST fields is negligible; SST assimilation at a 1-day interval is considered to have the dominant effect there. Similar to the difference in surface current fields between the two experiments, the SSH fields also reveal significant differences in the southern East Sea, for example over the southwestern Yamato Basin, where anticyclonic circulation develops. The comparison of SSH fields implies that SSH assimilation does not correct the SSH difference caused by withholding the Argo data. Thus Argo assimilation plays an important role in reproducing meso-scale circulation features in the East Sea.
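
The RMSE comparison between the two experiments can be illustrated with a short sketch; the temperature profiles below are invented placeholders, not DAESROM output:

```python
import numpy as np

def rmse(analysis, observed):
    """Root mean square error between an analysis field and observations."""
    return np.sqrt(np.mean((np.asarray(analysis) - np.asarray(observed)) ** 2))

# Hypothetical subsurface temperatures (deg C) along one profile
obs        = np.array([12.1, 10.4, 8.7, 6.2, 4.9])
exp_allda  = np.array([12.0, 10.6, 8.5, 6.4, 5.1])  # all observations assimilated
exp_noargo = np.array([12.5, 11.2, 9.6, 7.0, 5.8])  # Argo temperatures withheld

print(f"RMSE Exp. AllDa : {rmse(exp_allda, obs):.2f} deg C")
print(f"RMSE Exp. NoArgo: {rmse(exp_noargo, obs):.2f} deg C")
```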

Effects of Iterative Reconstruction Algorithm, Automatic Exposure Control on Image Quality, and Radiation Dose: Phantom Experiments with Coronary CT Angiography Protocols (반복적 재구성 알고리즘과 관전류 자동 노출 조정 기법의 CT 영상 화질과 선량에 미치는 영향: 관상동맥 CT 조영 영상 프로토콜 기반의 팬텀 실험)

  • Ha, Seongmin;Jung, Sunghee;Chang, Hyuk-Jae;Park, Eun-Ah;Shim, Hackjoon
    • Progress in Medical Physics / v.26 no.1 / pp.28-35 / 2015
  • In this study, we investigated the effects of an iterative reconstruction algorithm and an automatic exposure control (AEC) technique on image quality and radiation dose through phantom experiments with coronary computed tomography (CT) angiography protocols. We scanned the AAPM CT performance phantom using a 320-detector-row CT scanner. At tube voltages of 80, 100, and 120 kVp, the scanning was repeated with two settings of the AEC technique, i.e., with target standard deviation (SD) values of 33 (the higher tube current) and 44 (the lower tube current). The scanned projection data were also reconstructed in two ways: with filtered back projection (FBP) and with an iterative reconstruction technique (AIDR-3D). Image quality was evaluated quantitatively with the noise standard deviation, the modulation transfer function, and the contrast-to-noise ratio (CNR). More specifically, we analyzed the influence of the choice of tube voltage and reconstruction algorithm on tube current modulation and consequently on radiation dose. Noise reduction by the iterative reconstruction algorithm relative to FBP was most evident with the lower tube current protocols: noise decreased by 46% and 38% when the AEC was set to the lower dose (target SD = 44) and the higher dose (target SD = 33), respectively. As a side effect of iterative reconstruction, spatial resolution decreased, but only to a degree that did not offset the remarkable gains in noise reduction. Consequently, if coronary CT angiography is scanned and reconstructed using both automatic exposure control and iterative reconstruction, it is anticipated that, in comparison with a conventional acquisition method, image noise can be reduced significantly with only a slight decrease in spatial resolution, implying the clinical advantage of radiation dose reduction while remaining faithful to the ALARA principle.
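
A sketch of how noise SD and CNR are commonly measured from phantom regions of interest (ROIs); the ROI statistics below are simulated placeholders chosen to mimic the reported ~40% noise reduction, not the study's measurements:

```python
import numpy as np

def cnr(roi_object, roi_background):
    """One common CNR definition: |mean difference| / background noise SD."""
    return abs(np.mean(roi_object) - np.mean(roi_background)) / np.std(roi_background)

rng = np.random.default_rng(1)
# Simulated ROI pixel values (HU): same signal, different noise levels
roi_fbp  = rng.normal(100, 30, size=1000)  # FBP reconstruction
roi_aidr = rng.normal(100, 18, size=1000)  # iterative (about 40% lower noise)
bg_aidr  = rng.normal(50, 18, size=1000)   # background ROI in the same image

print(f"noise SD, FBP    : {np.std(roi_fbp):.1f} HU")
print(f"noise SD, AIDR-3D: {np.std(roi_aidr):.1f} HU")
print(f"CNR, AIDR-3D     : {cnr(roi_aidr, bg_aidr):.1f}")
```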

A Study on the Strategy of IoT Industry Development in the 4th Industrial Revolution: Focusing on the direction of business model innovation (4차 산업혁명 시대의 사물인터넷 산업 발전전략에 관한 연구: 기업측면의 비즈니스 모델혁신 방향을 중심으로)

  • Joeng, Min Eui;Yu, Song-Jin
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.57-75 / 2019
  • In this paper, we conducted a study focusing on the direction of business model innovation in the Internet of Things (IoT) industry, the most actively industrialized among the core technologies of the 4th Industrial Revolution. Policy, economic, social, and technological issues were derived using PEST analysis for global trend analysis. The paper also presents the outlooks for the IoT industry published by ICT-related global research institutes such as Gartner and International Data Corporation, which predicted that competition in network technologies will be an issue for the industrial Internet (IIoT) and the Internet of Things, based on infrastructure and platforms. The PEST analysis showed that developed countries are pursuing policies to respond to the 4th Industrial Revolution through government-led cooperation with the private sector (businesses and research institutes); South Korea was likewise expanding related R&D budgets and establishing related policies. On the economic side, the growth trend of the related industries (based on aggregate market value) and the performance of firms were reviewed. The growth of industries related to the 4th Industrial Revolution in advanced countries was found to be faster than that of other industries, while in Korea the growth of the "technical hardware and equipment" and "communication service" sectors was relatively low among industries related to the 4th Industrial Revolution. On the social side, enormous ripple effects are expected across society, largely due to changes in technology and industrial structure, in employment structure, and in job volume. On the technological side, changes were taking place in each industry, most visibly in the health and medical sectors and the manufacturing sectors, which were changing rapidly as they merged with the technologies of the 4th Industrial Revolution. In this paper, various management methodologies for innovating an existing business model were reviewed to cope with the rapidly changing industrial environment brought about by the 4th Industrial Revolution. Four criteria were established to select a management model suited to the new business environment: applicability, agility, diversity, and connectivity. An AHP analysis of an expert survey showed that the Business Model Canvas is best suited as a business model innovation methodology, with very high importance scores of 42.5 percent for applicability, 48.1 percent for agility, 47.6 percent for diversity, and 42.9 percent for connectivity. It was therefore selected as a model that can be applied flexibly across industrial ecosystems and paradigm shifts. The Business Model Canvas is a relatively recent management strategy tool that identifies the value of a business model through a nine-block approach covering the four key areas of a business: customer, offer, infrastructure, and financial viability. The paper presents directions for expanding and applying the nine blocks from the perspective of an IoT (ICT) company, and concludes by discussing which Business Model Canvas variants should be applied in the ICT convergence industry. Based on the nine blocks, if the canvas is adapted to the characteristics of the target company, various applications are possible, such as consolidating the blocks into five or seven, removing blocks, or segmenting blocks to fit those characteristics. Future research needs to develop customized business innovation methodologies for IoT companies, or for those performing Internet-based services. In addition, in this study the Business Model Canvas was derived from expert opinion as a useful tool for innovation; to extend and validate the research, studies on its usability are needed, presenting detailed implementation strategies such as various model application cases and application models for actual companies.
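
A short sketch of the AHP weighting step the study describes: criterion weights are derived as the normalized principal eigenvector of a pairwise comparison matrix. The matrix below is an invented example on Saaty's 1-9 scale, not the study's survey data:

```python
import numpy as np

# Invented pairwise comparisons of the four selection criteria:
# applicability, agility, diversity, connectivity (reciprocal matrix)
A = np.array([
    [1,   1/2, 1,   2],
    [2,   1,   1,   2],
    [1,   1,   1,   2],
    [1/2, 1/2, 1/2, 1],
])

# Priority weights = normalized principal (largest-eigenvalue) eigenvector
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
for name, wi in zip(["applicability", "agility", "diversity", "connectivity"], w):
    print(f"{name:13s}: {wi:.3f}")
```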

Application of spatiotemporal transformer model to improve prediction performance of particulate matter concentration (미세먼지 예측 성능 개선을 위한 시공간 트랜스포머 모델의 적용)

  • Kim, Youngkwang;Kim, Bokju;Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.329-352 / 2022
  • Particulate matter (PM) is reported to penetrate the lungs and blood vessels and to cause various heart diseases and respiratory diseases such as lung cancer. The subway is a means of transportation used by an average of 10 million people a day, and although it is important to provide a clean and comfortable environment, the level of particulate matter pollution there is high. This is because subways run through underground tunnels, and the particulate matter trapped in the tunnel moves into the underground station on the train wind. The Ministry of Environment and the Seoul Metropolitan Government are making various efforts to reduce PM concentrations by establishing measures to improve air quality at underground stations. A smart air quality management system manages air quality proactively by collecting air quality data and analyzing and predicting PM concentrations, and the PM concentration prediction model is an important component of this system. Various studies on time series prediction are being conducted, but research on PM prediction in subway stations has been limited to statistical models or recurrent neural network-based deep learning models. Therefore, in this study, we propose four transformer-based models, including spatiotemporal transformers. In PM concentration prediction experiments in the waiting rooms of subway stations in Seoul, the transformer-based models outperformed the existing ARIMA, LSTM, and Seq2Seq models, and among the transformer-based models, the spatiotemporal transformers performed best. A smart air quality management system operated through data-based prediction becomes more effective and energy efficient as the accuracy of PM prediction improves. The results of this study are expected to contribute to the efficient operation of smart air quality management systems.
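
A minimal sketch of a transformer encoder applied to multi-station PM time series, to illustrate the general approach only; the paper's four architectures (including the spatiotemporal variants) are not reproduced here, all dimensions are arbitrary, and positional encoding is omitted for brevity:

```python
import torch
import torch.nn as nn

class PMTransformer(nn.Module):
    """Toy encoder: embed per-time-step station readings, apply self-attention
    over time, forecast the next step for every station."""
    def __init__(self, n_stations=10, d_model=64):
        super().__init__()
        self.embed = nn.Linear(n_stations, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_stations)

    def forward(self, x):                  # x: (batch, time, stations)
        h = self.encoder(self.embed(x))    # self-attention across time steps
        return self.head(h[:, -1])         # prediction from the last step

model = PMTransformer()
x = torch.randn(8, 24, 10)  # 8 samples, 24 hourly steps, 10 stations/sensors
print(model(x).shape)       # -> torch.Size([8, 10])
```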

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1041-1043 / 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions can cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: for each of the given fuzzy sets, many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are typical of fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits used for a membership value, and dm(fm) is the number of bits used for the index of the corresponding membership function. In our case, Length = 3 × (5 + 3) = 24 bits, so the memory dimension is 128 × 24 bits. If we had chosen to memorize all the values of the membership functions, we would have needed to store on each memory row the membership value of every fuzzy set for that element, giving a word dimension of 8 × 5 = 40 bits and therefore a memory of 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64, 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
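
The memory-sizing arithmetic above can be checked directly with the figures stated in the abstract (128 elements, 8 fuzzy sets, 32 discretization levels, at most three non-null memberships per element):

```python
U_SIZE = 128  # elements in the universe of discourse (memory rows)
N_SETS = 8    # fuzzy sets in the term set

dm_m  = 5     # bits per membership value      (2**5 = 32 levels)
dm_fm = 3     # bits per membership-set index  (2**3 = 8 sets)
nfm   = 3     # max non-null memberships per element (the paper's hypothesis)

length = nfm * (dm_m + dm_fm)  # word length: 3 * (5 + 3) = 24 bits
print(f"proposed : {U_SIZE} x {length} = {U_SIZE * length} bits")

# Full vectorial memorization stores every set's value: 8 * 5 = 40 bits/word
vect = N_SETS * dm_m
print(f"vectorial: {U_SIZE} x {vect} = {U_SIZE * vect} bits")
```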


A Study on the Buyer's Decision Making Models for Introducing Intelligent Online Handmade Services (지능형 온라인 핸드메이드 서비스 도입을 위한 구매자 의사결정모형에 관한 연구)

  • Park, Jong-Won;Yang, Sung-Byung
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.119-138 / 2016
  • Since the Industrial Revolution, which made the mass production and mass distribution of standardized goods possible, machine-made (manufactured) products have accounted for the majority of the market. In recent years, however, purchasing even more expensive handmade products has become a noticeable trend as consumers have started to acknowledge the value of handmade products, such as the craftsman's commitment, belief in their quality and scarcity, and the sense of self-esteem from owning them. Consumer interest in these handmade products has shown explosive growth, coupled with the recent development of three-dimensional (3D) printing technologies. Etsy.com is the world's largest online handmade platform. Like any other online platform, it provides an online market where buyers and sellers virtually meet to share information and transact business; it differs in that the shops within the platform deal only in handmade products, in a variety of categories ranging from jewelry to toys. Since its establishment in 2005, despite being limited to handmade products, Etsy.com has enjoyed rapid growth in membership, transaction volume, and revenue. Most recently, in April 2015, it raised funds through an initial public offering (IPO) of more than 1.8 billion USD, which demonstrates the huge potential of online handmade platforms. After the success of Etsy.com, various types of online handmade platforms such as Handmade at Amazon, ArtFire, DaWanda, and Craft is ART have emerged and are now competing with each other, which has at the same time increased the size of the market. According to Deloitte's 2015 holiday survey on which types of gifts respondents planned to buy during the holiday season, about 16% of U.S. consumers chose "homemade or craft items (e.g., Etsy purchase)," the same rate as for the computer game and shoes categories. This indicates that consumer interest in online handmade platforms will continue to rise. However, this high interest in the market for handmade products and their platforms has not yet led to academic research: most extant studies have focused only on machine-made products and intelligent services for them, indicating a lack of studies on handmade products and their intelligent services on virtual platforms. Therefore, this study used signaling theory and prior research on the effects of sellers' characteristics on their performance (e.g., total sales and price premiums) in the buyer-seller relationship to identify the key influencing e-Image factors (e.g., reputation, size, information sharing, and length of relationship). Their impacts on the performance of shops within the online handmade platform were then empirically examined, using a dataset collected from Etsy.com through web harvesting. The results of structural equation modeling revealed that reputation, size, and information sharing have significant effects on total sales, while reputation and length of relationship influence price premiums. This study extends online platform research to online handmade platforms by identifying, on the basis of signaling theory, the key e-Image factors influencing within-platform shops' total sales and price premiums, and by examining them statistically. These findings are expected to be a stepping stone for future studies on intelligent online handmade services as well as on handmade products themselves.
Furthermore, the findings of the study provide online handmade platform operators with practical guidelines on how to implement intelligent online handmade services. They should also help shop managers build their marketing strategies in a more specific and effective manner by suggesting key influencing e-Image factors. The results of this study should contribute to the vitalization of intelligent online handmade services by providing clues on how to maximize within-platform shops' total sales and price premiums.
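
As a simplified stand-in for the reported structural equation model, the sketch below fits two separate OLS regressions mirroring the stated relationships; all column names and data are hypothetical placeholders, not the harvested Etsy.com dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300  # hypothetical number of harvested shops
df = pd.DataFrame({
    "reputation":   rng.normal(size=n),  # e.g., average review score
    "shop_size":    rng.normal(size=n),  # e.g., number of listings
    "info_sharing": rng.normal(size=n),  # e.g., profile completeness
    "rel_length":   rng.normal(size=n),  # e.g., years since shop opened
})
# Simulated outcomes wired to match the direction of the reported findings
df["total_sales"]   = (0.4 * df.reputation + 0.3 * df.shop_size
                       + 0.2 * df.info_sharing + rng.normal(size=n))
df["price_premium"] = 0.3 * df.reputation + 0.2 * df.rel_length + rng.normal(size=n)

print(smf.ols("total_sales ~ reputation + shop_size + info_sharing", df).fit().params)
print(smf.ols("price_premium ~ reputation + rel_length", df).fit().params)
```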

Comparison Study of Water Tension and Content Characteristics in Differently Textured Soils under Automatic Drip Irrigation (자동점적관수에 의한 토성별 수분함량 및 장력 변화특성 비교 연구)

  • Kim, Hak-Jin;Ahn, Sung-Wuk;Han, Kyung-Hwa;Choi, Jin-Yong;Chung, Sun-Ok;Roh, Mi-Young;Hur, Seung-Oh
    • Journal of Bio-Environment Control / v.22 no.4 / pp.341-348 / 2013
  • Maintenance of adequate soil tension or content during the period of crop growth is necessary to support optimum plant growth and yields. A better understanding of soil tension and content for precision irrigation would allow optimal soil water condition to crops and minimize the adverse effects of water stress on crop growth and development. This research reports on a comparison of soil water tension and content variations in differently textured soils over time under drip irrigation using two different water management methods, i.e. pulse time and required water irrigation methods. The pulse time-based irrigation was performed by turning the solenoid valve on and off for preset times to allow the wetting front to disperse in root zone before additional water was applied. The required water estimation method was a new water control logic designed by Rural Development Administration that applies the amount of water required based on a conversion of the measured water tension into water content. The use of the pulse time irrigation method under drip irrigation at a high tension of -20 kPa and high temperatures over $30^{\circ}C$ was not successful at maintaining moisture tensions within an appropriate range of 5 kPa because the preset irrigation times used for water control could not compensate for the change in evapotranspiration during day and night. The response time and pattern of water contents for all of the tested soils measured with capacitance-based sensor probes were faster and more direct than those of water tensions measured with porous and ceramic cup-based tensiometers when water was applied, indicating water content would be a better control variable for automatic irrigation. The required water estimation-based irrigation method provided relatively stable control of moisture tension, even though somewhat lower tension values were obtained as compared to the target tension of -20 kPa, indicating that growers could expect to be effective in controlling low tensions ranging from -10 to -20 kPa with the required water estimation system.