• Title/Summary/Keyword: Problem Solve


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a decisive victory against Lee Sedol. Many people thought a machine could not beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to achieve good results. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for the binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account.
In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values around a specific value and recognizes local features, but the distance between business data fields is usually irrelevant because the fields are independent. In this experiment, we therefore set the CNN filter size to the number of fields so that the model learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For dropout, neurons in each hidden layer were dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout.
This study yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well on a binary classification problem to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to solve business binary classification problems.
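The F1 score used above to evaluate the models is the harmonic mean of precision and recall for the class of interest. A minimal sketch of the metric (the toy labels below are illustrative, not the paper's bank data):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 score for one class of interest, ignoring true negatives."""
    # Count true positives, false positives, false negatives
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Unlike overall accuracy, the score never counts true negatives, which is why it better reflects performance on a rare positive class such as telemarketing responders.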

A Coexistence Model in a Dynamic Platform with ICT-based Multi-Value Chains: focusing on Healthcare Service (ICT 기반 다중 가치사슬의 동적 플랫폼에서의 공존 모형: 의료서비스를 중심으로)

  • Lee, Hyun Jung;Chang, Yong Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.69-93
    • /
    • 2017
  • The development of ICT has led to the diversification of supply and demand in markets and has created a variety of values differentiated from those in existing markets. As a result, a new type of market has emerged that can include multiple value chains, both from ICT-based newly created markets and from existing markets. We define this new-type market as a platform, in which multiple value chains with multiple values can coexist. In real markets, when a new value chain enters an existing market, it generally conflicts with the existing value chain. Conflict among multiple value chains arises because the chains share limited market resources such as suppliers, consumers, services, and products. In other words, when multiple value chains share a platform, values may conflict, overlap, be created, or be lost among them. To address this, we introduce coexistence factors that reduce conflict and move the platform toward market equilibrium. At the same time, the platform can generate values differentiated from the existing market and increase the total market value. In the early era of ICT, ICT was introduced to improve the efficiency and effectiveness of value chains in existing markets. However, as its role shifted from supporting the market to promoting it, ICT came to drive variation in value chains and the creation of new values. For instance, Uber created a new value chain with an ICT-based service and new resources, namely new suppliers and consumers. When Uber and traditional taxi services operate at the same time on the taxi-service platform, values may be created, or conflicts may arise, between the new and old value chains.
In this research, when conflicts arise among multiple value chains, as between Uber and traditional taxi services, they must be minimized so that the chains can coexist on the platform and create added value. It is therefore important to predict and discuss the conflicts that may arise between new and old value chains, and to resolve them so that the platform reaches market equilibrium. That is, we discuss whether multiple value chains comprising a variety of suppliers and customers can coexist on one platform. To do this, we focus on healthcare markets, which have become popular globally as well as domestically. There are now many kinds of healthcare services, such as traditional, tele-, and intelligent healthcare, meaning that multiple suppliers, consumers, and services form the components of different value chains on the same platform. The platform can be shared by different values that are created, or overlapped and lost through conflict, among the value chains. As stated, we focus on healthcare services to examine whether a platform can be shared by value chains such as traditional, tele-, and intelligent healthcare services and products, and whether the value of each chain, as well as the total value of the platform, can be increased. The results show that both are possible. Finally, we propose a coexistence model to overcome these problems and demonstrate, through experimentation, the possibility of coexistence between the value chains.

A Study on Maternity Aids Utilization in the Maternal and Child Health and Family Planning (농촌(農村)에 있어서 분만개조요원(分娩介助要員)의 봉사(奉仕)에 의(依)한 모자보건(母子保健)과 가족계획(家族計劃)에 관(關)한 연구(硏究))

  • Yeh, Min-Hae;Lee, Sung Kwan
    • Journal of Preventive Medicine and Public Health
    • /
    • v.5 no.1
    • /
    • pp.57-95
    • /
    • 1972
  • This study was conducted to assess the effectiveness of services provided by maternity aides for maternal and child health, in simultaneously improving infant mortality, contraception, and vital registration among expectant mothers in rural Korea, where there is little opportunity for maternal and child health care. It is unrealistic to expect to solve this problem in rural Korea through professional personnel, considering the state of medical facilities and the socioeconomic condition of residents. We therefore adopted a system of services by maternity aides who were formally educated from among indigenous women. After the women were trained for a short period in maternal and child health, contraception, and registration, they were assigned as maternity aides to their villages to help with various activities: registering pregnant women, home visiting to check for complications, supplying delivery kits, attending deliveries, encouraging contraception, prompting registration, and so on. Meanwhile, four researchers called on the maternity aides to collect material on vital events, maternal and child health, contraception, and registration, and to give further instruction and supervision as the program proceeded. A. Changes in women's attitudes resulting from maternity-aide services. We examined the extent to which this service system changed the attitudes of women residing in the study area compared with women in the control area. 1) As to birth and death places, there were no changes between the last and present infants in either the study or the control area. 2) As to attendants at delivery, there were no changes except a small percentage (8%) of deliveries attended by maternity aides in the study area. We expect that more maternity aides could be used as delivery attendants if they were trained further and the service were explained more fully to residents.
3) Considering the utilization rate of the sterilized delivery kit, more than 90 percent would likely be used if the kits were supplied at the proper time. There were significant differences in the rates between the study and control areas. 4) Given the utilization rates of the clinic for prenatal and well-baby care, such facilities would probably be well utilized if they were installed. 5) For contraception, the approval rate was as high as 89 percent in the study area, compared to 82 percent in the control area. 6) Since the rates of pre- and post-partum acceptance of contraception were 70 percent or more, the government could reach its family planning goals if women were adequately motivated to use contraception. 7) For vital registration, the birth registration rate in the study area was somewhat improved compared to the control area, while the death registration rate did not change at all. Given that the rate of confirmation of vital events by maternity aides was remarkably high, registration would improve significantly if the system were changed from formal registration to a notification system. B. Effect of the project. Given these changes in residents' attitudes, was there a reduction in the infant death rate? 1) Comparing the mortality of last and present infants is very difficult, because many women do not answer accurately about their dead children, especially infants who died within a few days of birth. In this study, the data on present deaths come from the maternity aides, who followed up every pregnancy they had recorded to see what had happened. They appear to have very reliable information on the first few weeks, with follow-up visits to check on later changes.
From these calculations, comparing infant death rates between last and present infants showed a remarkable reduction for present infants: 30, versus 42 for the last children. This is the lowest rate we have encountered. The quality of the data can be assessed by comparing causes of death: the current death rate from communicable diseases, especially tetanus and pneumonia, was much lower than for the last child. 2) Next, how many respondents used contraception after birth because of frequent contact with the maternity aide? Among registered cases, respondents tended to practice contraception at an earlier age and with fewer children. The contraception rate in the study area was significantly higher than in the control area, and the proportions favoring fewer children and of younger women using contraception rose in the study area relative to the control area. 3) Regarding vital registration, although the registration rate gradually improved through the efforts of the maternity aides, it would be better to change the registration system. 4) The crude birth rate was 22.2 in the study area and 26.5 in the control area; the natural increase rate was 15.4 in the study area and 19.1 in the control area. 5) Assessing the efficiency of the maternity aides from a cost-effectiveness viewpoint, the workers in the medium-sized area appeared to be more efficient than those in other areas.


Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.1-10
    • /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags are divided into active and passive tags: active tags have a power source and can operate on their own, while passive tags are small and low-cost, making them more suitable for the distribution industry. A reader processes the information received from tags, and an RFID system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied in a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the adoption of RFID systems, several problems (price, size, power consumption, security) must be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by simultaneous responses from multiple tags. RFID anti-collision schemes fall into three categories: probabilistic, deterministic, and hybrid. We introduce the ALOHA-based protocols as probabilistic methods and the tree-based protocols as deterministic ones. In ALOHA-based protocols, time is divided into multiple slots; tags randomly select slots and transmit their IDs, but because these methods are probabilistic, they cannot guarantee that all tags are identified. In contrast, tree-based protocols guarantee that a reader identifies all tags within its transmission range. In a tree-based protocol, the reader sends a query and tags respond with their IDs; when two or more tags respond, a collision occurs and the reader makes and sends a new query. Frequent collisions degrade identification performance, so collisions must be reduced efficiently to identify tags quickly. Each RFID tag has a 96-bit EPC (Electronic Product Code) ID.
Tags in the same company or from the same manufacturer have similar IDs sharing a prefix, so unnecessary collisions occur when identifying multiple tags with the Query Tree protocol; the number of query-responses and the idle time grow, and the identification time increases significantly. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, the Collision Tree and Query Tree protocols identify only one bit per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves identification performance using m-bit recognition, the collision information of tag IDs, and a prediction technique. We compare the proposed scheme with other tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
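The baseline Query Tree behavior described above, extending the query prefix by one bit on every collision, can be simulated in a few lines. This is a sketch of the plain binary Query Tree only, not the proposed Adaptive M-ary protocol, and the tag IDs are short illustrative bit strings rather than 96-bit EPCs:

```python
def query_tree_identify(tag_ids):
    """Simulate binary Query Tree anti-collision: the reader repeats a
    query for every pending prefix; tags whose ID starts with the prefix
    respond, and a collision splits the query into two longer prefixes."""
    identified, queries = [], 0
    stack = [""]  # pending query prefixes, starting from the empty query
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:
            identified.append(responders[0])       # exactly one reply: read it
        elif len(responders) > 1:
            stack.append(prefix + "0")             # collision: refine query
            stack.append(prefix + "1")
    return identified, queries
```

Tags sharing a long common prefix force many collision rounds, which is exactly the inefficiency that m-bit and M-ary variants attack by resolving several bits per query-response.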

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves both the data-imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning used only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. This makes it possible to provide stable default-risk assessment services to companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although prediction of corporate default risk using machine learning has recently been studied actively, model bias remains an issue because most studies make predictions with a single model. A stable and reliable valuation methodology, and strict calculation standards, are required, given that a company's default-risk information is very widely used in the market and sensitivity to differences in default risk is high.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for evaluation methods to be prepared, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced the bias of individual models by using stacking ensemble techniques that combine various machine learning models. This captures the complex nonlinear relationships between default risk and corporate information while preserving the advantage of machine-learning-based default risk prediction models, which take little time to compute. To produce the forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences, pairs were constructed between the stacking ensemble model's forecasts and each individual model's. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology by which existing credit rating agencies can apply machine-learning-based bankruptcy risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help in designing models that meet the requirements of the Financial Investment Business Regulations through combinations of various sub-models. We hope this research will be used to increase practical adoption by overcoming the limitations of existing machine-learning-based models.
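The seven-way split used to build the stacking inputs works like out-of-fold prediction: each sub-model forecasts a training row only when that row was held out of its training folds. A minimal sketch of those fold mechanics with a toy base learner (the split scheme follows the description above; the model and data are illustrative, not the paper's):

```python
import numpy as np

class MeanModel:
    """Toy base learner: always predicts the training-set mean of y."""
    def fit(self, X, y):
        self.mean = float(np.mean(y))
        return self
    def predict(self, X):
        return np.full(len(X), self.mean)

def out_of_fold_predictions(X, y, model_factories, n_folds=7):
    """Build the meta-model's training features: each base model predicts
    every training row only from folds it was not trained on."""
    n = len(X)
    folds = np.array_split(np.arange(n), n_folds)
    meta_X = np.zeros((n, len(model_factories)))
    for j, make_model in enumerate(model_factories):
        for held_out in folds:
            train_idx = np.setdiff1d(np.arange(n), held_out)
            model = make_model()  # fresh, unfitted model for each fold
            model.fit(X[train_idx], y[train_idx])
            meta_X[held_out, j] = model.predict(X[held_out])
    return meta_X
```

A meta-model (for instance a logistic regression) would then be fitted on `meta_X` against `y`, so that it never sees a base model's in-sample predictions.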

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. In this flood of information, efforts are being made to reflect the user's intention in search results better, rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful and promising, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas of vast information flow, but it faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data-processing methods are applied in the presented model to solve the problems of previous research and enhance the model's effectiveness. The study thus has three significances. First, it offers a practical and simple automatic knowledge extraction method that can be applied directly. Second, it shows that performance evaluation is possible through a simple problem definition. Third, it increases the expressiveness of the extracted knowledge by generating input data on a sentence basis without complex morphological analysis. The empirical analysis and an objective performance-evaluation method are also presented. For the empirical study, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, were used. Of the 5,600 reports in total, 3,074 (about 55%) were designated as the training set and the remaining 45% as the test set. Before constructing the model, all reports in the training set were classified by stock, and their entities were extracted using the KKMA named-entity recognition tool. For each stock, the top 100 entities by appearance frequency were selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock was trained. When a new entity from the test set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we checked its predictive power and whether the score functions were well constructed by calculating the hit ratio over all reports in the test set.
In the empirical study, the presented model showed 69.3% hit accuracy on a test set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at prediction performance by stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, performed far below average, perhaps because of interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, needed to search for information relevant to a user's investment intention. Graph data are generated using only the named-entity recognition tool and applied to the neural tensor network without a learned corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, limits remain: notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.

Jeju Shinyang Fishing Port Remodeling Plan Utilizing Marine Tourism Resources (해양관광자원을 활용한 제주 신양항 리모델링 계획)

  • Kim, Yelim;Sung, Jong-Sang
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.44 no.2
    • /
    • pp.52-69
    • /
    • 2016
  • The fishing port was once both a foothold of production and a stronghold of community, but with the decline of the fishing industry, ports became abandoned spaces. Jeju Special Self-Governing Province has worked to vitalize marine tourism since 2010. Shinyang Port in particular is designated as a Prearranged Marina Port Development Zone, and planning for the Jeju Ocean Marina City project is underway. Nevertheless, fishing-port remodeling projects on Jeju have so far focused only on civil engineering, such as renovating old facilities, and most marina port development projects have been irrelevant to local communities. Projects led by the local government mostly suffer from a lack of funding, which results in renovation of old facilities and improper maintenance, while private-sector investment projects do not share benefits with the community. Shinyang Port, also renovated in 2008, ended up with an outer-breakwater extension that neither solved the fundamental problem of the site nor benefited residents. To address the problems of civil-engineering-centered development, improper maintenance, and benefit sharing with the community, this study first proposes a development plan that connects the port with its outlying areas. The plan reflects the existing topography, traditional Jeju stone walls, and narrow paths in the master plan and its programs by reading the regional context, suggesting a space development plan that reflects the local landscape and its characteristic factors. Second, it satisfies various needs by using existing and new marine tourism resources. Third, it examines sustainable operation and management measures based on residents' participation.
The proposal is significant in two key ways: it is a fresh attempt to connect the fishing port with its outlying areas from a landscape perspective, and it considers environmental, social, and economic issues and proposes participation by local communities. The model can thus be used in future fishing-port remodeling plans to revitalize unused space, including invaluable traditional landscapes, and to boost the marine leisure industry.

An Evaluation of the Adequacy of Pont's Index (Pont 지수의 임상적 적합성에 대한 평가)

  • Kim, Seong-Hun;Lee, Ki-Soo
    • The korean journal of orthodontics
    • /
    • v.30 no.1 s.78
    • /
    • pp.115-126
    • /
    • 2000
  • Dental arch expansion is one method used to solve dental crowding without extraction. Many formulae using tooth size have been suggested to predict ideal inter-premolar and inter-molar widths. The purpose of this study was to evaluate the adequacy of several upper-dental-arch width prediction methods, namely Pont's, Schmuth's, and Cha's. The sample consisted of casts of 119 Korean young adults with no muscular abnormality, no skeletal discrepancy, and Angle's Class I molar relationships. Measurements were obtained directly from plaster casts and included the mesiodistal crown diameters of the four maxillary incisors, as well as the maxillary inter-first-premolar and inter-first-molar arch widths as specified by Pont. Correlation coefficients between the sum of the incisors (SI) and upper arch widths were calculated, and the differences between predicted and actual widths were classified as overestimated, properly estimated, or underestimated. The data from each group were analyzed for statistical differences. The results were as follows: 1. Upper-arch width indices calculated from SI in normal occlusion were 81.96 (premolar index) and 62.55 (molar index). 2. Correlations between SI and arch width in normal occlusion were low (0.50 for the inter-premolar width, 0.39 for the inter-molar width). 3. Pont's and Schmuth's formulae tended to overestimate the inter-premolar width; Cha's formula gave a more even distribution of estimates. 4. Cases within ±1 mm of the observed inter-premolar width were 45% for Cha's formula, 40% for Pont's, and 39% for Schmuth's. 5. All formulae tended to underestimate the inter-molar width, but Cha's formula predicted better than the others. 6. Cases within ±1 mm of the observed inter-molar width were 40% for Cha's formula, 29% for Pont's, and 13% for Schmuth's.
The data presented in this study does not support the clinical usefulness of ideal arch width prediction methods using the mesiodistal width of maxillary incisors.

  • PDF
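The arch-width indices above follow the classic Pont scheme, where a predicted width is obtained as SI × 100 / index (Pont's own premolar and molar indices are 80 and 64). A minimal Python sketch, assuming this standard formula; the function name and the example SI value of 30 mm are illustrative, not taken from the study:

```python
def predicted_arch_width(si_mm, index):
    """Predict an upper arch width (mm) from the sum of the four
    maxillary incisor widths (SI) via: predicted = SI * 100 / index."""
    return si_mm * 100.0 / index

# Pont's classic indices
PONT_PREMOLAR, PONT_MOLAR = 80.0, 64.0
# Indices measured in this Korean normal-occlusion sample
KOREAN_PREMOLAR, KOREAN_MOLAR = 81.96, 62.55

si = 30.0  # hypothetical SI of 30 mm
print(round(predicted_arch_width(si, PONT_PREMOLAR), 2))    # 37.5
print(round(predicted_arch_width(si, KOREAN_PREMOLAR), 2))  # 36.6
```

Because Pont's premolar index (80) is smaller than the index measured in this sample (81.96), Pont's formula predicts a wider inter-premolar arch, consistent with the reported tendency to overestimate; the opposite holds at the molar (64 vs. 62.55), consistent with the underestimation of inter-molar width.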

Rural Survey on Agricultural Mechanization Project - Rice Transplantation Operation - (농업기계화(農業機械化)에 관(關)한 연구(硏究) - 수도이앙작업(水稻移秧作業)의 기계화(機械化)를 중심(中心)으로 -)

  • Ahn, Su Bong;Kim, Soung Rai;Kim, Ki Dae
    • Korean Journal of Agricultural Science
    • /
    • v.8 no.2
    • /
    • pp.203-211
    • /
    • 1981
  • Mechanization of the rice transplanting operation is an important project, not only to solve the labor shortage at the peak labor-demand season of rice transplanting, but also to reduce the production cost of rice by lowering labor requirements. For these reasons, this study was carried out to collect basic data to encourage the mechanization of rice transplanting. A total of 381 sample farms were surveyed with questionnaires, and a considerable number of relevant personnel were interviewed about the operation, selection, and ownership trends of the rice transplanter. The collected data were analyzed at the Chungnam National University computer center using frequencies, cross-tabulations, and χ²-tests. The results of this survey are summarized as follows: 1. About 76.09% of the farmers interviewed owned a rice transplanter individually, but about 52.27% of the farmers who wanted to purchase one within 2 or 3 years supported cooperative ownership and utilization. This suggests that a village-level cooperative system should be thoroughly studied. 2. Of the respondents, 93.33% answered that the yield of rice was not affected by whether planting was done by machine or by hand. 3. The farmers who already had a rice transplanter owned 4-row machines with mat-type seedlings, but 25% of the farmers who wanted to purchase one within 2 or 3 years wanted a 4-row machine with band-type seedlings. Therefore, the introduction of the 4-row rice transplanter with band-type seedlings to rural areas should be reconsidered. 4. The proportion of farmers who wanted a village-level cooperative system was 49.57-57.83% among those who already owned a transplanter or wanted to own one in the near future. This was reinforced by the fact that seedling nursery work was technically supported at the governmental level.

  • PDF
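The χ²-tests applied to the cross-tabulated questionnaire answers can be reproduced in a few lines of Python. A minimal sketch of the Pearson chi-square statistic for a contingency table; the 2×2 table below is purely illustrative and does not use the survey's actual counts:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table
    (list of rows of observed counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: ownership preference vs. purchase timing
table = [[40, 60],
         [55, 45]]
print(round(chi_square(table), 3))  # 4.511
```

The statistic would then be compared against the χ² distribution with (r−1)(c−1) degrees of freedom to decide whether the two classifications are independent.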

Setup of Infiltration Galleries and Preliminary Test for Estimating Its Effectiveness in Sangdae-ri Water Curtain Cultivation Area of Cheongju, Korea (청주 상대리 수막재배지의 지중 침투형 갤러리 설치와 예비 주입시험)

  • Moon, Sang-Ho;Kim, Yongcheol;Kim, Sung-Yun;Ki, Min-Gyu
    • Economic and Environmental Geology
    • /
    • v.49 no.6
    • /
    • pp.445-458
    • /
    • 2016
  • Most water curtain cultivation (WCC) areas in Korea have chronically suffered from gradual drawdown of the groundwater level and the related shortage of water resources in the late stage of the WCC peak season. To solve this problem, artificial recharge techniques have recently been applied to some WCC areas. This study introduces the infiltration gallery, one of the artificial recharge methods, and tentatively examines the effectiveness of three galleries installed at the Sangdae-ri WCC area of Cheongju City. Seven galleries were set up in the empty spaces between eight vinyl houses in this area; each was designed to be 50 cm in width and height and 300 cm in length. The installation process included bed excavation, backfill with gravels and silica sands, and completion of the gallery by equipping it with a piezometer and covering it with non-woven cloth. For each of the B, C, and D galleries, three types of tests were performed: a preliminary test, a four-step injection test, and one long-term injection test. The first preliminary test showed rough relations between injection rates and water-level rise as follows: 20 cm and 30 cm of rise for 33.29-33.84 m³/d and 45.60-46.99 m³/d in the B gallery; 0 cm, 16 cm, and 33 cm of rise for 21.1 m³/d, 33.98 m³/d, and 41.69 m³/d in the C gallery; 29 cm and 42 cm of rise for 48.10 m³/d and 52.23 m³/d in the D gallery. Afterwards, more quantitative estimates of artificial recharge effectiveness were derived through the stepped and long-term injection tests, which are expected to be used to estimate the quantity of water re-injected into the aquifer through these galleries by natural injection over the WCC peak season.
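The injection-rate vs. level-rise pairs quoted for each gallery can be summarized with a simple fit. A minimal ordinary-least-squares sketch in Python, using the three C-gallery readings from the abstract; the fitting routine and the interpretation of the slope (cm of water-level rise per additional m³/d injected) are an assumption for illustration, not the study's analysis method:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# C-gallery preliminary-test readings:
# injection rate (m^3/d) -> water-level rise (cm)
rates = [21.1, 33.98, 41.69]
rises = [0.0, 16.0, 33.0]
a, b = fit_line(rates, rises)
print(round(b, 2))  # slope: cm of rise per additional m^3/d
```

A steeper slope for a given gallery would indicate that the local aquifer accepts additional recharge only at the cost of a faster mound build-up, which is the kind of quantitative relation the stepped and long-term tests were designed to pin down.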