Research and Development Trends on Omega-3 Fatty Acid Fortified Foodstuffs (오메가 3계 지방산 강화 식품류의 연구개발 동향)
Journal of the Korean Society of Food Science and Nutrition / v.26 no.1 / pp.161-174 / 1997
Omega-3 fatty acids have been a major research interest in the medical, nutritional, and life sciences since epidemiologic data on Greenland Eskimos, reported by several researchers, clearly showed fewer per capita deaths from heart disease and a lower incidence of adult diseases. Linolenic acid (LNA), like linoleic acid (LA), is an essential fatty acid for human beings because vertebrates lack the enzyme required to incorporate a double bond beyond carbon 9 in the chain. In addition, the ratio of omega-6 to omega-3 fatty acids appears to be important in alleviating heart disease, since LA and LNA compete for the metabolic pathways of eicosanoid synthesis. High consumption of omega-3 fatty acids in seafoods may control heart disease by reducing blood cholesterol, triglyceride, VLDL, and LDL, by increasing HDL, and by inhibiting plaque development through the formation of antiaggregatory substances like PGI
Scarcity is a pervasive aspect of human life and a fundamental precondition of consumers' economic behavior. The scarcity message is also a powerful social influence principle used by marketers to increase the subjective desirability of products. Because valuable objects are often scarce, consumers tend to infer that scarce objects are valuable, and marketers often base promotional appeals on the principle of scarcity to increase the subjective desirability of their products. In particular, advertisers and retailers often promote their products using restrictions. These restrictions constrain consumers' ability to take advantage of the promotion and can assume several forms. For example, some promotions are advertised as limited-time offers, while others limit the quantity that can be bought at the deal price by employing statements such as 'limit one per consumer,' 'limit 5 per customer,' or 'limited products for special commemoration celebration.' Some retailers use such statements extensively: a recent weekly flyer by a prominent retailer limited purchase quantities on 50% of the specials advertised on its front page. When consumers see these phrases, they often infer value in a product that has limited availability or is promoted as being scarce. However, past researchers explored only a direct relationship between purchase quantity or time limits and deal purchase intention. Nor did they explore the possibility that not all restriction messages are created equal; we argue that different restrictions signal deal value in different ways, or through different mechanisms. Consumers appear to perceive that time limits are used to attract consumers to the brand, while quantity limits are necessary to reduce stockpiling. This suggests other possible differences across restrictions. For example, quantity limits could imply product quality (i.e., this product at this price is so good that purchases must be limited). In contrast, purchase preconditions force the consumer to spend a certain amount to qualify for the deal, which suggests that inferences about the absolute quality of the promoted item would decline from purchase limits (highest quality) to time limits to purchase preconditions (lowest quality). This might be expected to be particularly true for unfamiliar brands. However, a critical but elusive issue in scarcity message research is the impact of inferred motives on the promoted scarcity message. Past researchers have not explored the possibility of inferred motives in the scarcity message context, and despite the various types of quantity limit messages, they did not distinguish among them. Therefore, we apply a stricter definition of the scarcity message (i.e., quantity limits) and consider scarcity message type (general vs. special scarcity message) and scarcity depth (high vs. low). The purpose of this study is to examine the effect of the scarcity message on consumers' purchase intention. Specifically, we investigate the effect of general versus special scarcity messages on purchase intention, using the level of scarcity depth as a moderator. In other words, we postulate that scarcity message type and scarcity depth play an essential moderating role in the relationship between inferred motives and purchase intention. Thus, different from past studies, we examine the interplay between perceived motives and scarcity type, and between perceived motives and scarcity depth.
Both of these constructs have been examined in isolation, but a key question is whether they interact to produce an effect as scarcity message type or scarcity depth increases. The perceived motive inference behind the scarcity message will have an important impact on consumers' reactions to the degree of scarcity depth increase. In relation to this general question, we investigate the following specific issues. First, do consumers' inferred motives weaken the positive relationship between a scarcity depth decrease and consumers' purchase intention, and if so, how much do they attenuate this relationship? Second, we examine the interplay between the scarcity message type and consumers' purchase intention in the context of a scarcity depth decrease. Third, we study whether scarcity message type and scarcity depth directly affect consumers' purchase intention. To answer these questions, this research is composed of a 2 (intention inference: existence vs. nonexistence)
The study examines the relationships among employees' goal orientation, IT personnel competency, and personal effectiveness. Goal orientation includes learning goal orientation, performance approach goal orientation, and performance avoid goal orientation. Personal effectiveness consists of personal work satisfaction and personal work performance. In general, IT personnel competency refers to the skills, expertise, and knowledge that IT experts require to perform IT activities in organizations. However, with the advent of the Internet and the generalization of IT, IT personnel competency has turned out to be an important competency not only of technological experts but of employees in general. While IT competency itself is important, the appropriate harmony between IT personnel's business capability and technological capability enhances the value of human resources and thus provides organizations with sustainable competitive advantages. The rapid pace of organizational change places increased pressure on employees to continually update their skills and adapt their behavior to new organizational realities. This challenge raises a number of important questions concerning organizational behavior: Why do some employees display remarkable flexibility in their behavioral responses to changes in the organization, whereas others firmly resist change or experience great stress when faced with the need to alter behavior? Why do some employees continually strive to improve themselves over their life span, whereas others are content to forge through life using the same basic knowledge and skills? Why do some employees throw themselves enthusiastically into challenging tasks, whereas others avoid them? The goal orientation construct proposed by organizational psychology provides at least a partial answer to these questions. Goal orientations are stable personal characteristics fostered by "self-theories" about the nature and development of the attributes (such as intelligence, personality, abilities, and skills) that people have. Self-theories are one's beliefs, and goal orientations are the achievement motivation revealed in seeking goals in accordance with those beliefs. Specifically, a learning goal orientation refers to a preference to develop the self by acquiring new skills, mastering new situations, and improving one's competence. A performance approach goal orientation refers to a preference to demonstrate and validate the adequacy of one's competence by seeking favorable judgments and avoiding negative judgments. A performance avoid goal orientation refers to a preference to avoid the disproving of one's competence and to avoid negative judgments about it, while focusing on performance. The study also examines the moderating role of employees' work career to investigate differences in the relationship between IT personnel competency and personal effectiveness. The study analyzes the collected data using PASW 18.0 and PLS (Partial Least Squares), and uses the PLS bootstrapping algorithm (sample size: 500) to test the research hypotheses. The result shows that the influences of both a learning goal orientation (
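As a rough illustration of the bootstrap testing described above, the sketch below resamples a PLS model 500 times to obtain coefficient confidence intervals. Note the assumptions: the study used PLS path modeling (PLS-SEM), for which scikit-learn's PLSRegression is only a simplified stand-in, and the sample size, predictor count, and synthetic data here are illustrative only.

```python
# Bootstrap confidence intervals for PLS coefficients (sketch).
# NOTE: the study used PLS-SEM path modeling; sklearn's PLSRegression
# is a simplified stand-in for illustration, not the paper's software.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p = 200, 4                      # hypothetical: 200 employees, 4 predictors
X = rng.normal(size=(n, p))        # e.g., goal-orientation and competency scores
y = X @ np.array([0.5, 0.3, -0.2, 0.1]) + rng.normal(scale=0.5, size=n)

boot_coefs = []
for _ in range(500):               # 500 bootstrap resamples, as in the abstract
    idx = rng.integers(0, n, size=n)              # resample rows with replacement
    pls = PLSRegression(n_components=2).fit(X[idx], y[idx])
    boot_coefs.append(pls.coef_.ravel())

boot_coefs = np.array(boot_coefs)
lo, hi = np.percentile(boot_coefs, [2.5, 97.5], axis=0)   # 95% percentile CIs
print("95% bootstrap CIs per predictor:", list(zip(lo.round(3), hi.round(3))))
```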
Background: Alteration of the p53 tumor suppressor gene is the most frequently identified genetic change in human neoplasms, including lung carcinoma. It is well known that the bcl-2 oncoprotein protects cells from apoptosis. Recent studies have demonstrated that bcl-2 expression is associated with a favorable prognosis for patients with non-small cell lung carcinoma. However, the precise biologic role of bcl-2 in the development of these tumors is still obscure. p53 and bcl-2 have important regulatory influence in the apoptotic pathway, and thus their relationship is of interest in tumorigenesis, especially in lung cancer. Purpose: The author investigated the prognostic significance of the expression of p53 and bcl-2 in radically resected non-small cell lung cancer. Method: 84 formalin-fixed, paraffin-embedded blocks from primary non-small cell lung cancers resected between 1980 and 1994 at Hanyang University Hospital were available for both clinical follow-up and immunohistochemical staining using monoclonal antibodies against p53 and bcl-2. Results: The histologic classification of the tumors was based on WHO criteria, and the specimens included 45 squamous cell carcinomas (53.6%), 28 adenocarcinomas (33.3%), and 11 large cell carcinomas (13.1%). p53 immunoreactivity was noted in 47 of 84 cases (56.0%), and bcl-2 immunoreactivity in 15 of 84 cases (17.9%). The mean survival duration was
Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction and the risk of negative consequences from providing such information. More recently, the frequent disclosure of private information has raised concerns about privacy and its impacts, motivating researchers in various fields to explore information privacy issues. Accordingly, the necessity for information privacy policies and technologies for collecting and storing data has increased, as has information privacy research in fields such as medicine, computer science, business, and statistics. The occurrence of various information security incidents has made finding experts in the information privacy field an important issue, and objective measures for finding such experts are required, as the process is currently rather subjective. Based on social network analysis, this paper proposes a framework to evaluate the process of finding experts in the information privacy field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially gathering about 2,000 papers covering the period between 2005 and 2013. Outliers and irrelevant papers were dropped, leaving 784 papers to test the suggested hypotheses. The co-authorship network data on co-author relationships, publishers, affiliations, and so on were analyzed using social network measures, including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported. In line with our hypothesis, degree centrality (H1) had a positive influence on researchers' publishing performance (p<0.001), indicating that as the degree of cooperation increased, so did researchers' publishing performance. In addition, closeness centrality (H2) was positively associated with publishing performance (p<0.001), suggesting that as the efficiency of information acquisition increased, so did publishing performance. This paper identified differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field, and the co-authorship network for information privacy can aid in understanding the deep relationships among researchers. In addition, by extracting the characteristics of publishers and affiliations, this paper offers an understanding of social network measures and their potential for finding experts in the information privacy field. Social concerns about securing the objectivity of experts have increased, because experts in the information privacy field frequently participate in political consultation and in business education support and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field, and it is useful for people in charge of managing research human resources. This study has some limitations, which provide opportunities and suggestions for future research.
The observed differences in information diffusion according to media and proximity are difficult to generalize due to the small sample size. Therefore, further studies could consider an increased sample size and greater media diversity, and explore in more detail the differences in information diffusion according to media type and information proximity. Moreover, previous network research has commonly assumed a causal relationship between independent and dependent variables (Kadushin, 2012). In this study, degree centrality as an independent variable might have a causal relationship with performance as a dependent variable; however, in network analysis research, network indices can only be computed after the network relationships have formed. An annual analysis could help mitigate this limitation.
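For concreteness, the following sketch computes the social network measures named above (degree, closeness, and eigenvector centrality, plus Burt's constraint as a structural hole measure) with networkx; the toy edge list stands in for the NDSL co-authorship network, which is not reproduced here.

```python
# Co-authorship centrality and structural-hole measures (sketch).
# The edge list below is a toy example, not the NDSL co-authorship data.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),   # a tightly knit author cluster
    ("C", "D"), ("D", "E"),               # C bridges to a second cluster
])

degree = nx.degree_centrality(G)          # H1: extent of cooperation
close  = nx.closeness_centrality(G)       # H2: efficiency of information acquisition
eigen  = nx.eigenvector_centrality(G)     # H3: ties to well-connected researchers
holes  = nx.constraint(G)                 # Burt's constraint; low = spans structural holes

for v in G:
    print(f"{v}: degree={degree[v]:.2f} closeness={close[v]:.2f} "
          f"eigenvector={eigen[v]:.2f} constraint={holes[v]:.2f}")
```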
Kim Jeong-gi (pen-name: Changsan, Mar. 31, 1930 - Aug. 26, 2015) made a major breakthrough in the history of cultural property excavation in Korea. He began to develop an interest in cultural heritage after joining the National Museum of Korea in 1959. For about thirty years, until he retired from the National Research Institute of Cultural Heritage in 1987, he devoted his life to the excavation of Korea's historical relics and artifacts and compiled a vast body of data about them. He continued striving to identify the unique value and meaning of Korea's cultural heritage in universities and excavation organizations until he passed away in 2015. Changsan spearheaded all of Korea's monumental archaeological excavations and research. He is widely known at home and abroad as a scholar of Korean archaeology, particularly in the early years of its existence as an academic discipline, and as such he has had a considerable influence on its development. Although his multiple activities and roles are meaningful in terms of the country's archaeological history, there are nevertheless limits to his contributions. The Deoksugung Palace period (1955-1972), when the National Museum of Korea was situated in Deoksugung Palace, is considered a time of great significance for Korean archaeology, as relics with diverse characteristics were researched during this period. Changsan actively participated in archaeological surveys of prehistoric shell mounds and dwellings, conducted surveys of historical relics, measured many historical sites, and took charge of photographing and drawing such relics. He put to good use all the excavation techniques that he had learned in Japan, and his countrywide archaeological surveys are highly regarded in terms of academic history as well. What particularly sets his perspectives apart in archaeological terms is the fact that he raised the possibility of underwater tombs in ancient times and coined the term "Haemi Culture" as part of a theory of local culture aimed at furthering understanding of Bronze Age cultures in Korea. His output was simply breathtaking. In 1969, the National Research Institute of Cultural Heritage (NRICH) was founded and Changsan was appointed as its head. Despite the many difficulties he faced in running the institute with limited financial and human resources, he gave everything he had to research and field studies of the brilliant cultural heritage that Korea has preserved for so long. Changsan succeeded in restoring Bulguksa Temple, and followed this up with the successful excavations of the Cheonmachong Tomb and the Hwangnamdaechong Tomb in Gyeongju. He then explored the Hwangnyongsa Temple site, Bunhwangsa Temple, and the Mireuksa Temple site in order to systematically evaluate the Buddhist culture and structures of the Three Kingdoms Period. We can safely say that the large excavation projects that he organized and carried out at that time not only laid the foundations for Korean archaeology but also made significant contributions to studies in related fields. Above all, in terms of the developmental process of Korean archaeology, the achievements he generated with his exceptional passion during this period are almost too numerous to mention, but they include his systematization of various excavation methods, cultivation of archaeologists, popularization of archaeological excavations, formalization of survey records, and promotion of data disclosure.
On the other hand, although this "Excavation King" devoted himself to excavations, kept precise records, and paid keen attention to every detail, he failed to overcome the limitations of his era in defining the nature of cultural remains and interpreting historical sites and structures. Despite his many roles in Korean archaeology, the fact that he left behind a controversy over the identity of the occupant of the Hwangnamdaechong Tomb remains a sore spot in his otherwise stellar reputation.
Corporate defaults have a ripple effect on the local and national economy, in addition to affecting stakeholders such as the managers, employees, creditors, and investors of bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises,' went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to reflect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables used in predicting corporate defaults vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed. Grice (2001) likewise found, using Zmijewski's (1984) and Ohlson's (1980) models, that the importance of predictive variables shifts. However, past studies use static models, and most do not consider changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Against the backdrop of the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. In order to construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using the pre-crisis data (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the training results and excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the preceding nine years, demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model based on these three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
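A minimal sketch of the chronological split and a recurrent classifier of the kind described is given below, using Keras. The abstract does not name its framework, network architecture, or feature set, so the LSTM layer, the sizes, and the random placeholder data are illustrative assumptions only.

```python
# Chronological train/validation/test split and an LSTM default classifier (sketch).
# Framework, architecture, and data below are illustrative assumptions;
# the paper does not specify them.
import numpy as np
import tensorflow as tf

timesteps, n_features = 3, 20          # e.g., 3 years of ~20 financial ratios per firm

def make_split(n):                     # placeholder for real annual firm data
    X = np.random.rand(n, timesteps, n_features).astype("float32")
    y = (np.random.rand(n) < 0.1).astype("float32")   # ~10% default rate, assumed
    return X, y

X_train, y_train = make_split(7000)    # 2000~2006: training period (7 years)
X_val,   y_val   = make_split(2000)    # 2007~2008: validation (crisis period)
X_test,  y_test  = make_split(1000)    # 2009: held-out test period

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # P(default)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=5, verbose=0)
print("test AUC:", model.evaluate(X_test, y_test, verbose=0)[1])
```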
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data suffer from nonlinear variables, multicollinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and offers better predictive power. With the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative analysis material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
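The Lasso-based variable selection step could look like the following sketch, which uses an L1-penalized logistic regression (the classification analogue of the Lasso regression the abstract names) from scikit-learn; the ratio names and toy data are hypothetical, not the paper's variable list.

```python
# L1-regularized (Lasso-style) variable selection over financial ratios (sketch).
# Ratio names and data are hypothetical; the paper's variable list is not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

ratios = ["debt_ratio", "current_ratio", "roa", "asset_turnover", "cash_flow_to_debt"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, len(ratios)))
y = (X[:, 0] - X[:, 2] + rng.normal(size=500) > 1.5).astype(int)   # toy default labels

X_std = StandardScaler().fit_transform(X)          # scale so penalties are comparable
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_std, y)

selected = [r for r, c in zip(ratios, lasso.coef_.ravel()) if abs(c) > 1e-6]
print("variables surviving the L1 penalty:", selected)
```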
Thanks to the rapid development of information technologies, the data available on the Internet have grown rapidly. In this era of big data, many studies have attempted to derive insights and demonstrate the effects of data analysis. In the tourism and hospitality industry, many firms and studies have paid attention to online reviews on social media because of their large influence over customers. As tourism is an information-intensive industry, the effect of these information networks on social media platforms is more remarkable than in any other type of media. However, there are limitations to the service quality improvements that can be made based on opinions on social media platforms. Users on social media platforms express their opinions as text, images, and so on, and the raw data sets from these reviews are unstructured. Moreover, these data sets are too large for humans to extract new information and hidden knowledge from them unaided. To use them for business intelligence and analytics applications, proper big data techniques such as natural language processing and data mining are needed. This study suggests an analytical approach to directly yield insights from these reviews to improve the service quality of hotels. Our proposed approach consists of topic mining, to extract the topics contained in the reviews, and decision tree modeling, to explain the relationship between topics and ratings. Topic mining refers to a method for finding a group of words in a collection of documents that represents a document. Among several topic mining methods, we adopted the Latent Dirichlet Allocation (LDA) algorithm, which is considered the most widely used. However, LDA alone is not enough to find insights that can improve service quality, because it cannot find the relationship between topics and ratings. To overcome this limitation, we also use the Classification and Regression Tree (CART) method, a kind of decision tree technique. Through the CART method, we can find which topics are related to positive or negative ratings of a hotel and visualize the results. Therefore, this study investigates an analytical approach for improving hotel service quality from unstructured review data sets. Through experiments on four hotels in Hong Kong, we identify the strengths and weaknesses of each hotel's services and suggest improvements to aid customer satisfaction. From positive reviews in particular, we find what these hotels should maintain for service quality; for example, compared with the other hotels, one hotel has a good location and room condition, as extracted from its positive reviews. In contrast, we also find what they should modify in their services from negative reviews; for example, one hotel should improve room conditions related to soundproofing. These results indicate that our approach is useful for finding insights into the service quality of hotels. That is, from an enormous volume of review data, our approach can provide practical suggestions for hotel managers to improve their service quality. In the past, studies for improving service quality relied on surveys or interviews of customers. However, these methods are often costly and time consuming, and the results may be distorted by biased sampling or untrustworthy answers. The proposed approach directly obtains honest feedback from customers' online reviews and draws insights through a type of big data analysis.
It is thus a more useful tool for overcoming the limitations of surveys and interviews. Moreover, our approach can easily obtain service quality information for other hotels or services in the tourism industry, because it needs only open online reviews and ratings as input data. Furthermore, the performance of our approach will improve if other structured and unstructured data sources are added.
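A minimal sketch of the two-stage pipeline described above (LDA topic mining followed by a CART model relating topic weights to ratings) is shown below using scikit-learn; the four reviews and their ratings are toy placeholders for the Hong Kong hotel data.

```python
# LDA topics -> CART regression on ratings (sketch).
# Reviews and ratings below are toy placeholders, not the Hong Kong hotel data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.tree import DecisionTreeRegressor, export_text

reviews = [
    "great location near the station, clean room",
    "room was noisy, poor soundproofing at night",
    "friendly staff, excellent breakfast and service",
    "small room but perfect location for sightseeing",
]
ratings = [5, 2, 5, 4]

counts = CountVectorizer(stop_words="english").fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
topic_weights = lda.fit_transform(counts)          # one topic distribution per review

# CART: which topics drive ratings up or down, rendered as readable rules
cart = DecisionTreeRegressor(max_depth=2).fit(topic_weights, ratings)
print(export_text(cart, feature_names=[f"topic_{i}" for i in range(3)]))
```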
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as on dialyzers reused in patients several times. C-DAK 4000 (Cordis Dow) and CF 15-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70