• Title/Summary/Keyword: Input method


The Effect of Structured Information on the Sleep Amount of Patients Undergoing Open Heart Surgery (계획된 간호 정보가 수면량에 미치는 영향에 관한 연구 -개심술 환자를 중심으로-)

  • 이소우
    • Journal of Korean Academy of Nursing
    • /
    • v.12 no.2
    • /
    • pp.1-26
    • /
    • 1982
  • The main purpose of this study was to test the effect of structured information on the sleep amount of patients undergoing open heart surgery. The study specifically addressed the following two basic research questions: (1) Would the structured information reduce sleep disturbance related to anxiety and physical stress before and after the operation? and (2) What would be the effects of the structured information on the level of preoperative state anxiety, the hormonal change, and the degree of behavioral change in patients undergoing open heart surgery? A quasi-experimental study was designed to answer these questions with one experimental group and one control group. Subjects in both groups were matched as closely as possible to avoid the effect of differences inherent in group characteristics. Baseline data were also collected on both groups for 7 days prior to the experiment, and subjects in both groups were found to have comparable sleep patterns, trait anxiety, hormonal levels and behavioral levels. Structured information, as the experimental input, was given to the subjects in the experimental group only. Data were collected and compared between the experimental group and the control group on the sleep amount of the consecutive pre- and postoperative days, on the preoperative state anxiety level, and on hormonal and behavioral changes. To test the effectiveness of the structured information, two main hypotheses and three sub-hypotheses were formulated as follows. Main hypothesis 1: The experimental group receiving structured information will have a greater sleep amount than the control group without structured information on the night before the open heart surgery.
Main hypothesis 2: The experimental group with structured information will have a greater sleep amount than the control group without structured information during the week following the open heart surgery. Sub-hypothesis 1: The experimental group with structured information will show a lower level of state anxiety than the control group without structured information on the night before the open heart surgery. Sub-hypothesis 2: The experimental group with structured information will have a lower hormonal level than the control group without structured information on the 5th day after the open heart surgery. Sub-hypothesis 3: The experimental group with structured information will show a lower level of behavioral change than the control group without structured information during the week after the open heart surgery. The research was conducted in a national university hospital in Seoul, Korea. The 53 subjects who participated in the study were divided into an experimental group and a control group by random sampling: 26 were placed in the experimental group and 27 in the control group. Instruments: (1) Structured information: structured information, the independent variable, was constructed by the researcher on the basis of Roy's adaptation model, covering physiologic needs, self-concept, role function and interdependence needs as related to sleep, together with operational procedures. (2) Sleep amount measure: sleep amount, the main dependent variable, was measured by trained nurses through observation against established criteria such as closed or open eyes, regular or irregular respiration, body movement, posture, responses to light and questions, facial expressions, and self-report after sleep. (3) State anxiety measure: state anxiety, a sub-dependent variable, was measured by Spielberger's STAI anxiety scale. (4) Hormonal change measure: hormone level, a sub-dependent variable, was measured by the cortisol level in plasma.
(5) Behavior change measure: behavior, a sub-dependent variable, was measured by Wyatt's Behavior and Mood Rating Scale. The data were collected over a period of four months, from June to October 1981, after a pretest period of two months. For the analysis of the data and tests of the hypotheses, the t-test on mean differences and analysis of covariance were used. The results of the instrument tests were as follows: (1) The STAI measurements of trait and state anxiety, analyzed with Cronbach's alpha coefficient for item analysis and reliability, showed reliability levels of r = .90 and r = .91, respectively. (2) The Behavior and Mood Rating Scale measurement was analyzed by the principal component analysis technique. The seven factors retained were anger, anxiety, hyperactivity, depression, bizarre behavior, suspicious behavior and emotional withdrawal; the cumulative percentage of the retained factors was 71.3%. The results of the hypothesis tests were as follows: (1) Main hypothesis 1 was not supported. The experimental group had 282 minutes of sleep as compared to 255 minutes in the control group. Thus, the sleep amount was higher in the experimental group than in the control group; however, the difference was not statistically significant at the .05 level. (2) Main hypothesis 2 was not supported. The mean sleep amounts of the experimental group and control group were 297 minutes and 278 minutes, respectively. The experimental group therefore had more sleep than the control group; however, the difference was not statistically significant at the .05 level. (3) Sub-hypothesis 1 was not supported. The mean state anxiety scores of the experimental group and control group were 42.3 and 43.9, respectively. Thus, the experimental group had a slightly lower state anxiety level than the control group; however, the difference was not statistically significant at the .05 level. (4) Sub-hypothesis 2 was not supported.
The mean hormonal levels of the experimental group and control group were 338 ㎍ and 440 ㎍, respectively. Thus, the experimental group showed a lower hormonal level than the control group; however, the difference was not statistically significant at the .05 level. (5) Sub-hypothesis 3 was supported. The mean behavioral scores of the experimental group and control group were 29.60 and 32.00, respectively. Thus, the experimental group showed a lower level of behavioral change than the control group, and the difference was statistically significant at the .05 level. In summary, the structured information did not influence the sleep amount, state anxiety or hormonal level of the subjects undergoing open heart surgery at a statistically significant level; however, definite trends appeared in these relationships, and the effect on the level of behavioral change was significant. It can further be speculated that the large individual differences in variables such as sleep amount, state anxiety and fluctuation in hormonal level may partly be responsible for the statistical insensitivity of the experiment.
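The reliability figures above (r = .90 and r = .91) are Cronbach's alpha coefficients. As a minimal illustration of how such a coefficient is computed from raw item scores, the sketch below uses a small invented data set, not the study's actual STAI responses:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of equal-length lists, one list of scores per scale item
    (rows = items, columns = respondents)."""
    k = len(items)           # number of items
    n = len(items[0])        # number of respondents

    def var(xs):             # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(col) for col in items)
    # total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Hypothetical 3-item scale answered by 4 respondents
scores = [[2, 3, 4, 5], [2, 3, 5, 5], [1, 3, 4, 4]]
print(cronbach_alpha(scores))  # high internal consistency (~0.975)
```

Values near .90, as reported for the STAI here, indicate that the items move together closely enough to be summed into a single anxiety score.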


A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to emerge in daily life. Artificial intelligence is applied to consumer electronics and communications products such as artificial intelligence refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology: two stock traders could handle the work of 600, and analytical work that had taken 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning is being actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning theory are also actively studied. The linearity limitations of financial time series studies are overcome by using machine learning methods such as artificial intelligence prediction models. Quantitative studies of financial data based on past stock-market-related numerical data widely use artificial intelligence to forecast future movements of stock prices or indices. Various other studies have predicted the future direction of the market or the stock prices of companies by learning from large amounts of text data, such as news and comments related to the stock market. Investment in commodity assets, one class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared to mainstream assets like equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial area. In this study we built an investment model using the Support Vector Machine (SVM), one of the machine learning models.
Some studies on commodity assets focus on price prediction for a specific commodity, but it is hard to find research on commodity investment models for asset allocation using machine learning. We propose a method of forecasting four major commodity indices, a portfolio made of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors, energy, agriculture, and metals, that are actively traded on the CME market and have enough liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold and Silver futures. We formed an equally weighted portfolio of the six commodity futures for comparison with the other commodity indices. Because commodity assets are closely related to macroeconomic activity, we set 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, as the input data of the model: 14 US economic indicators, two Chinese economic indicators and two Korean economic indicators. The data period is from January 1990 to May 2017. We set the first 195 monthly observations as training data and the remaining 125 monthly observations as test data. In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the other commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function. On the other hand, the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors.
The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar despite variations in the data period, so we also used the odd-numbered-year data as training data and the even-numbered-year data as test data, and confirmed that the analysis results are similar. As a result, when allocating commodity assets to a traditional portfolio composed of stocks, bonds, and cash, more effective investment performance can be obtained by investing in commodity futures rather than in commodity indices, and in particular by using a commodity futures portfolio rebalanced by the SVM model.
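The abstract compares models by directional prediction accuracy and by the performance of a rebalanced portfolio. The sketch below illustrates only that evaluation step, with invented monthly signals standing in for the SVM's forecasts; the paper's actual features, kernel settings, and rebalancing rule are not reproduced here:

```python
def hit_rate(predicted, actual):
    """Share of months where the predicted direction (+1/-1)
    matches the sign of the realized return."""
    hits = sum(1 for p, r in zip(predicted, actual) if p * r > 0)
    return hits / len(actual)

def rebalanced_return(signals, returns):
    """Cumulative return of a monthly strategy that holds the
    portfolio on a +1 signal and stays in cash on a -1 signal."""
    wealth = 1.0
    for s, r in zip(signals, returns):
        if s > 0:
            wealth *= 1.0 + r
    return wealth - 1.0

# Hypothetical monthly portfolio returns and model signals
rets = [0.02, -0.01, 0.03, -0.02]
sigs = [1, -1, 1, 1]
print(hit_rate(sigs, rets))           # 0.75 directional accuracy
print(rebalanced_return(sigs, rets))  # strategy return over the window
```

The paper's finding that a 53% hit rate on the futures portfolio beats sub-50% accuracy on the indices corresponds to `hit_rate` crossing the coin-flip threshold, which is what makes the rebalancing rule profitable on average.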

Immunohistochemical Localization of NMDA Receptor in the Auditory Brain Stem of Postnatal 7, 16 Circling Mouse (생후 7일, 16일된 circling mouse 청각 뇌줄기에서 N-메틸-D 아스파르트산염 수용체(NMDA receptor)에 대한 면역염색학적 분포)

  • Choi, In-Young;Park, Ki-Sup;Kim, Hye-Jin;Maskey, Dhiraj;Kim, Myeung-Ju
    • Applied Microscopy
    • /
    • v.40 no.2
    • /
    • pp.53-64
    • /
    • 2010
  • Glutamate receptors may play a critical role in the refinement of developing synapses. Synaptic transmission between the lateral superior olivary nucleus (LSO) and the medial nucleus of the trapezoid body (MNTB) in the mammalian auditory brain stem is mediated by excitatory transmitters such as glutamate, making it a useful model for studying excitatory synaptic development. Hearing deficits are often accompanied by changes in synaptic organization, in excitatory or inhibitory circuits, as well as by anatomical changes. For this reason the circling mouse, whose cochlea degenerates spontaneously after birth, is an excellent animal model for studying deafness pathophysiology. However, little is known about the developmental regulation of the subunits composing these receptors in the circling mouse. Thus, we used an immunohistochemical method to compare the distribution of the N-methyl-D-aspartate (NMDA) receptor subunits NR1, NR2A, and NR2B in the LSO, which projects glutamatergic excitatory input into the auditory brainstem, between circling mice at postnatal day (p) 7 and 16, which carry a spontaneous inner-ear mutation, and wild-type mice. The relative NMDAR1 immunoreactive density of the LSO at p7 was $128.67\pm8.87$ in wild-type, $111.06\pm8.04$ in heterozygote, and $108.09\pm5.94$ in homozygote mice. At p16 the density was $43.83\pm10.49$ in wild-type, $40\pm13.88$ in heterozygote, and $55.96\pm17.35$ in homozygote mice. The relative NMDAR2A immunoreactive density of the LSO at p7 was $97.97\pm9.71$ in wild-type, $102.87\pm9.30$ in heterozygote, and $106.85\pm5.79$ in homozygote mice. At p16 it was $47.4\pm20.6$ in wild-type, $43.9\pm17.5$ in heterozygote, and $49.2\pm20.1$ in homozygote mice. The relative NMDAR2B immunoreactive density of the LSO at p7 was $109.04\pm6.77$ in wild-type, $106.43\pm10.24$ in heterozygote, and $105.98\pm4.10$ in homozygote mice.
At p16 it was $101.47\pm11.5$ in wild-type, $91.47\pm14.81$ in heterozygote, and $93.93\pm15.71$ in homozygote mice. These results reveal alterations of NMDAR immunoreactivity in the LSO of p7 and p16 circling mice. The results of the present study are likely to be relevant to understanding the central changes underlying human hereditary deafness.

Impacts of Climate Change on Rice Production and Adaptation Method in Korea as Evaluated by Simulation Study (생육모의 연구에 의한 한반도에서의 기후변화에 따른 벼 생산성 및 적응기술 평가)

  • Lee, Chung-Kuen;Kim, Junwhan;Shon, Jiyoung;Yang, Woon-Ho;Yoon, Young-Hwan;Choi, Kyung-Jin;Kim, Kwang-Soo
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.14 no.4
    • /
    • pp.207-221
    • /
    • 2012
  • Air temperature in Korea has increased by $1.5^{\circ}C$ over the last 100 years, nearly twice the global average rate over the same period, and this change is projected to continue in the 21st century. The objective of this study was to evaluate the potential impacts of future climate change on rice production and adaptation methods in Korea. Climate data for the baseline period (1971~2000) and three future periods (2011~2040, 2041~2070, and 2071~2100) at fifty-six sites in South Korea under the IPCC SRES A1B scenario were used as input to the rice crop model ORYZA2000. Six experimental schemes were carried out to evaluate the combined effects of climatic warming, $CO_2$ fertilization, and cropping season on rice production. We found that average production in 2071~2100 would decrease by 23%, 27%, and 29% for the early, middle, and middle-late maturing rice types, respectively, when cropping seasons were fixed. In contrast, the predicted yield reduction was ~0%, 6%, and 7% for the early, middle, and middle-late maturing types, respectively, when cropping seasons were changed. Analysis of variance suggested that climatic warming, $CO_2$ fertilization, cropping season, and rice maturing type contributed 60%, 10%, 12%, and 2% of the variation in rice yield, respectively. In addition, regression analysis suggested that 14~46% and 53~86% of the variation in rice yield was explained by grain number and filled grain ratio, respectively, when the cropping season was fixed; with a changed cropping season, 46~78% and 22~53% of the variation was explained, respectively. Sterility caused by high temperature was projected to have no effect on rice yield. As a result, rice yield reduction under the future climate in Korea would result from a low filled grain ratio due to high growing temperature during the grain-filling period, because $CO_2$ fertilization was insufficient to negate the negative effect of climatic warming.
However, adjusting cropping seasons to the future climate may alleviate the reduction in rice production by minimizing the negative effect of climatic warming without altering the positive effect of $CO_2$ fertilization, thereby improving weather conditions during the grain-filling period.

Policy Direction for The Farmland Sizing Suitable to Regional Trait (지역특성을 반영한 영농규모화사업의 발전방향-충남지역을 중심으로-)

  • Shim, Jae-Sung
    • The Journal of Natural Sciences
    • /
    • v.14 no.1
    • /
    • pp.83-121
    • /
    • 2004
  • This study was carried out to examine how solid the production foundation of rice in Chung-Nam Province is and, if it is not, to probe alternative measures through the sizing of farms specializing in rice, whose direction would be a pivot of rice-industry-oriented policy. The results obtained can be summarized as follows: 1. The amount of rice production in Chung-Nam Province is the highest in Korea and its paddy field area is the second largest, implying that rice production in Chung-Nam Province could be severely influenced by global market conditions. The farms specializing in rice that form the core group of rice farming account for 7.7 percent of all farm households in Korea. The financial support provided to farm households by the government had a noticeable effect on the improvement of the farm-size program. 2. The farm-size program in Chung-Nam Province from 1980 to 2002 increased the cultivated paddy field area to 19,484 hectares, and the program promoted the buying and selling of farmland: farmland transactions reached 6,431 households and 16,517 hectares in 1995-2002. Meanwhile, long-term letting and hiring of farmland was so active that the transaction acreage reached 6,970 hectares, involving 7,059 farm households. However, the farm-exchange-and-unity program did not meet expectations, because retiring farm operators were reluctant to sell their farms. Another cause of delay lay in the social complications attendant upon the exchange-and-unity operation for scattered farms. Such difficulties work against the overall targets of the farm-size program. 3. The following measures are presented to propel the farm-size promotion program: a.
An occupation-shift project, together with a social security program for retiring and elderly farm operators, should be promptly established, and incentives for promoting the letting-and-hiring work and the farm-exchange-and-unity program should also be set up. b. To establish an effective system of rice production, all farm operators should increase the unit-area yield of rice and lower production costs. To do so, many rice production teams equipped with managerial techniques and capabilities need to be organized, and there should be appropriate arrays of facilities, including an information system. This plan should be in line with the structural implementation of regional integration based on farm-system building. c. To extend farm size and improve farm management, we have to devise both the enlargement of individual farm size for maximized management and the use of farm-size grouping methods. In conclusion, the farm-size project in Chung-Nam Province, which has continued since the 1980s, has been satisfactorily achieved. However, many problems remain to be solved to break down the barriers to desirable farm-size operation. The farm-size project is closely related to farm specialization in rice, and thus positive support for farm households, including an integrated program for both retiring farmers and off-farm operators, should be considered to pursue the progressive development of the farm-size program, which is a key means to the successful enforcement of rice farming in Chung-Nam Province.


A Thermal Time-Driven Dormancy Index as a Complementary Criterion for Grape Vine Freeze Risk Evaluation (포도 동해위험 판정기준으로서 온도시간 기반의 휴면심도 이용)

  • Kwon, Eun-Young;Jung, Jea-Eun;Chung, U-Ran;Lee, Seung-Jong;Song, Gi-Cheol;Choi, Dong-Geun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.8 no.1
    • /
    • pp.1-9
    • /
    • 2006
  • Despite the warmer winters recently observed in Korea, more freeze injuries and associated economic losses are reported in the fruit industry than ever before. Existing freeze-frost forecasting systems employ only the daily minimum temperature to judge potential damage to dormant flowering buds and cannot accommodate potential biological responses such as short-term acclimation of plants to severe weather episodes, or annual variation in climate. We introduce 'dormancy depth', in addition to daily minimum temperature, as a complementary criterion for judging the potential damage of freezing temperatures to dormant flowering buds of grape vines. Dormancy depth can be estimated by a phenology model driven by daily maximum and minimum temperature and is expected to be a reasonable proxy for the physiological tolerance of buds to low temperature. Dormancy depth at a selected site was estimated for a climatological normal year by this model, and we found a close similarity in the time-course pattern between the estimated dormancy depth and the known cold tolerance of fruit trees. Inter-annual and spatial variation in dormancy depth were identified by this method, showing the feasibility of using dormancy depth as a proxy indicator of tolerance to low temperature during the winter season. The model was applied to 10 vineyards recently damaged by a cold spell, and the temperature-dormancy depth-freeze injury relationship was formulated as an exponential-saturation model which can be used to judge freeze risk under a given set of temperature and dormancy depth conditions. Based on this model and the expected lowest temperature with a 10-year recurrence interval, a freeze risk probability map was produced for Hwaseong County, Korea. The results seem to explain why the vineyards in the warmer part of Hwaseong County have suffered more freeze damage than those in the cooler part of the county.
A dual-engine freeze warning system based on dormancy depth and minimum temperature was designed for vineyards in major production counties in Korea by combining site-specific dormancy depth and minimum temperature forecasts with the freeze risk model. In this system, the daily accumulation of thermal time since the previous fall yields the dormancy state (depth) for the current day. The regional minimum temperature forecast for the next day issued by the Korea Meteorological Administration is converted to a site-specific forecast at 30 m resolution. These data are input to the freeze risk model, and the percent damage probability is calculated for each grid cell and mapped for the entire county. Similar approaches may be used to develop freeze warning systems for other deciduous fruit trees.
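The abstract does not give the phenology model's actual parameterization, so the following is only a toy sketch of the general idea of a thermal-time-driven dormancy index: daily mean temperatures below a base temperature deepen dormancy (chilling), while temperatures above it shallow dormancy (forcing). The base temperature and the forcing weight are invented for illustration and are not the paper's calibrated values:

```python
def dormancy_depth(daily_tmax, daily_tmin, base=5.0):
    """Toy thermal-time dormancy index (hypothetical parameters).

    Each day's mean temperature below `base` deepens dormancy by the
    deficit in degree-days; warmth above `base` erodes it at half
    weight. Depth never drops below zero. Returns the daily trace."""
    depth = 0.0
    trace = []
    for tmax, tmin in zip(daily_tmax, daily_tmin):
        tmean = (tmax + tmin) / 2.0
        depth += max(0.0, base - tmean)        # chilling deepens dormancy
        depth -= 0.5 * max(0.0, tmean - base)  # forcing shallows it
        depth = max(depth, 0.0)
        trace.append(depth)
    return trace

# Three hypothetical days: two cold, one mild
print(dormancy_depth([2, 0, 10], [-2, -6, 6]))  # [5.0, 13.0, 11.5]
```

In the paper's system, an index of this kind, driven daily since the previous fall, supplies the dormancy state that is combined with the site-specific minimum temperature forecast in the freeze risk model.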

A Study on the Characteristics of Enterprise R&D Capabilities Using Data Mining (데이터마이닝을 활용한 기업 R&D역량 특성에 관한 탐색 연구)

  • Kim, Sang-Gook;Lim, Jung-Sun;Park, Wan
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.1-21
    • /
    • 2021
  • As the global business environment changes, uncertainty in technology development and market needs increases, and competition among companies intensifies, interest in and demand for the R&D activities of individual companies are growing. To cope with these environmental changes, R&D companies are strengthening R&D investment as one means of enhancing the qualitative competitiveness of R&D while paying more attention to facility investment. As a result, facility and R&D investments inevitably become burdens that R&D companies must bear against future uncertainty, and the management strategy of increasing R&D investment as a means of enhancing R&D capability is indeed highly uncertain in terms of corporate performance. In this study, the structural factors that influence the R&D capabilities of companies are explored in terms of technology management capability, R&D capability, and corporate classification attributes by utilizing data mining techniques, and the characteristics these individual factors present according to the level of R&D capability are analyzed. The study also presents cluster analysis and experimental results based on evidence data for all domestic R&D companies and is expected to provide important implications for corporate management strategies to enhance the R&D capabilities of individual companies. For the three viewpoints, detailed evaluation indexes were composed of 7, 2, and 4 items, respectively, to quantitatively measure individual levels in the corresponding area. For technology management capability and R&D capability, the sub-item evaluation indexes used by current domestic technology evaluation agencies were referenced, and the final detailed evaluation index was newly constructed in consideration of whether data could be obtained quantitatively. For the corporate classification attributes, the most basic corporate classification profile information was considered.
In particular, to grasp the homogeneity of the R&D competency level, a comprehensive score for each company was computed using the detailed evaluation indicators of technology management capability and R&D capability, the competency level was classified into five grades, and the grades were compared with the cluster analysis results. To interpret the comparison between the clusters and the competency grades, clusters trending high or low in R&D competency level were identified, and the characteristics of the detailed evaluation indicators were then analyzed within each cluster. Through this procedure, two clusters with high R&D competency and one with a low level of R&D competency were identified, while the remaining two clusters were similar, with relatively high incidence. Accordingly, this study analyzed the individual characteristics of the detailed evaluation indexes for the two high-competency clusters and the one low-competency cluster. The implications of the results are that the faster the replacement cycle of professional managers who can respond effectively to changes in technology and market demand, the more likely they are to contribute to enhancing R&D capability. In the case of a private company, it is necessary to increase the input of R&D capability by strengthening R&D personnel's sense of belonging through conversion to a corporate entity, and to clarify responsibility and authority through team-unit organization. Since technology commercialization achievements and technology certifications occur both in cases that contribute to capability improvement and in cases that do not, it was confirmed that there is a limit to treating them as important factors for enhancing R&D capability from a management perspective.
Lastly, experience with utility model filing was identified as a factor with an important influence on R&D capability, confirming the need to provide motivation that encourages utility model filings in order to enhance R&D capability. As such, the results of this study are expected to provide important implications for corporate management strategies to enhance individual companies' R&D capabilities.
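The grading step described above, a comprehensive score per company classified into five grades for comparison against the clusters, can be sketched as a simple rank-based quintile assignment. The scores below are hypothetical; the paper's actual indicator weights and scoring scheme are not reproduced:

```python
def assign_grades(scores, n_grades=5):
    """Rank-based grade assignment: grade 1 = top quintile,
    grade n_grades = bottom quintile."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    grades = [0] * len(scores)
    for rank, idx in enumerate(order):
        # map ranks 0..n-1 onto grades 1..n_grades in equal slices
        grades[idx] = rank * n_grades // len(scores) + 1
    return grades

# Hypothetical comprehensive scores for five companies
print(assign_grades([90, 10, 50, 70, 30]))  # [1, 5, 3, 2, 4]
```

Comparing a grade vector like this against cluster labels (e.g., cross-tabulating grades by cluster) is one straightforward way to check, as the study does, whether the clusters found by data mining line up with the score-based competency levels.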

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflow is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing: existing studies have focused on only a small subset of them. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify information privacy factors in most studies, despite the limits of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. A survey that assumes the participants have sufficient experience with or understanding of the technologies shown in it may therefore not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for the Delphi study proposed by Okoli and Pawlowski.
This involves three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the resulting list. In the first round, experts were treated as individuals, not as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors needed to understand information privacy concern appropriately; to assist them, some of the main factors found in the literature were presented. The second round of the questionnaire took the main factors provided in the first round and fleshed them out with relevant sub-factors drawn from the literature survey. Respondents were then asked to evaluate each sub-factor's suitability against its corresponding main factor, and the final sub-factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and respondents were asked to assess the importance of each main factor and its sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to that end, a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identity data are the most important main factor and sub-factor, respectively. 
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ in the following ways from those proposed in other studies. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected those characteristics with a higher potential to increase users' privacy concerns. Secondly, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, the study correlated the level of importance with professionals' opinions as to the extent of users' privacy concerns. The traditional questionnaire method was not selected because users of context-aware personalized services were considered to lack understanding of and experience with the new technology. Regarding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and the sensory network as the most important factors among the technological characteristics of context-aware personalized services. 
In creating a context-aware personalized service, this study demonstrates the importance of determining an optimal methodology: which technologies, in what sequence, should acquire which types of users' context information. Most studies, presupposing the continued development of context-aware technology, focus on which services and systems should be provided by utilizing context information. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following the sub-factor evaluation results, additional studies are needed on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identity data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display of services to users in context-aware personalized services, oriented toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work is to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
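The abstract names concordance analysis but gives no formula; a common choice for quantifying expert agreement over successive Delphi rounds is Kendall's coefficient of concordance W. A minimal sketch (the expert rankings below are hypothetical, not the study's data):

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance W.

    ranks: (m, n) array -- m experts each ranking n factors (1 = most important).
    Returns W in [0, 1]; values near 1 indicate strong agreement.
    """
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                       # total rank per factor
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()     # deviation from mean rank sum
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical example: 5 experts ranking 4 privacy-concern factors.
ranks = np.array([
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 3, 4],
    [1, 2, 4, 3],
    [1, 2, 3, 4],
])
print(round(kendalls_w(ranks), 3))   # 0.776
```

A rising W between rounds would indicate that the panel is converging toward consensus, which is the stopping signal such studies typically use.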

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived people's interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation; the second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a great deal of effort. Moreover, even given a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. 
However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding an optimal combination of multiple layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, which carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple-ConvNet-layer representation. 
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieves superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.
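The three-step pipeline (extract layer activations, concatenate to 9192 dimensions, reduce with PCA) can be sketched as follows. Random arrays stand in for real AlexNet activations, and the component count k=32 is an arbitrary illustration; only the dimensions (4096/4096/1000) come from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for AlexNet fully connected layer activations
# (FC6/FC7 = 4096 dims, FC8 = 1000 dims, per the abstract).
n_images = 50
fc6 = rng.standard_normal((n_images, 4096))
fc7 = rng.standard_normal((n_images, 4096))
fc8 = rng.standard_normal((n_images, 1000))

# Step 2: concatenate the three layer representations -> 9192-dim features.
features = np.concatenate([fc6, fc7, fc8], axis=1)
assert features.shape == (n_images, 9192)

# Step 3: PCA via SVD, projecting onto the top-k principal components to
# discard redundant/noisy directions before training a classifier.
def pca_reduce(x: np.ndarray, k: int) -> np.ndarray:
    x_centered = x - x.mean(axis=0)
    # Rows of vt are principal axes, ordered by explained variance.
    _, _, vt = np.linalg.svd(x_centered, full_matrices=False)
    return x_centered @ vt[:k].T

reduced = pca_reduce(features, k=32)
print(reduced.shape)   # (50, 32)
```

The reduced matrix would then be fed to an ordinary classifier (e.g., a linear SVM) in place of any single layer's raw activations.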

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
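The abstract does not reproduce the GM formulation. A common singly constrained form distributes each zone's productions in proportion to attraction times friction, T_ij = P_i·A_j·F_ij / Σ_k A_k·F_ik; a sketch with a hypothetical exponential friction factor curve (all zone numbers, volumes, and the β parameter are illustrative, not the study's calibrated values):

```python
import numpy as np

def gravity_model(productions, attractions, friction):
    """Singly constrained gravity model: T_ij = P_i * A_j * F_ij / sum_k A_k * F_ik."""
    weights = attractions[None, :] * friction                 # A_j * F_ij
    return productions[:, None] * weights / weights.sum(axis=1, keepdims=True)

# Hypothetical 3-zone example.
productions = np.array([100.0, 200.0, 50.0])    # truck trips produced per zone
attractions = np.array([80.0, 150.0, 120.0])    # relative zonal attractiveness
travel_time = np.array([[5.0, 20.0, 40.0],
                        [20.0, 5.0, 25.0],
                        [40.0, 25.0, 5.0]])
beta = 0.05                                      # friction parameter (calibrated in practice)
friction = np.exp(-beta * travel_time)           # one friction curve per trip type in the study

trips = gravity_model(productions, attractions, friction)
# Row sums reproduce the zonal productions exactly.
print(np.allclose(trips.sum(axis=1), productions))   # True
```

Calibration in the study amounts to adjusting the friction factor curve until the model's trip length frequency distribution matches the observed OD TLF.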
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The result of SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. 
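The link adjustment step described above (ground count divided by assigned volume, applied to every O-D pair whose trips use the selected link) can be sketched with hypothetical zone numbers and volumes:

```python
def link_adjustment_factor(ground_count: float, assigned_volume: float) -> float:
    """Ratio of the observed link volume to the volume assigned by the model."""
    return ground_count / assigned_volume

# Trips assigned to one selected link, keyed by (origin_zone, destination_zone).
trips_on_link = {(1, 4): 120.0, (2, 4): 60.0, (3, 5): 20.0}
assigned = sum(trips_on_link.values())               # 200 assigned trucks in total

# Observed ground count of 250 trucks -> the model underassigns this link.
factor = link_adjustment_factor(ground_count=250.0, assigned_volume=assigned)

# Apply the factor to every O-D pair using the selected link, scaling the
# corresponding zonal productions and attractions up or down.
adjusted = {od: trips * factor for od, trips in trips_on_link.items()}
print(factor)              # 1.25
print(adjusted[(1, 4)])    # 150.0
```

Repeating this over many selected links (16 or 32 in the study) and reassigning traffic iterates the productions and attractions toward the observed counts.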
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run, using 32 and 16 selected links, is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of those factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
