• Title/Summary/Keyword: large

Diagnostic Efficacy of FDG-PET Imaging in Solitary Pulmonary Nodule (고립성폐결절의 진단시 FDG-PET의 임상적 유용성에 관한 연구)

  • Cheon, Eun Mee;Kim, Byung-Tae;Kwon, O. Jung;Kim, Hojoong;Chung, Man Pyo;Rhee, Chong H.;Han, Yong Chol;Lee, Kyung Soo;Shim, Young Mog;Kim, Jhingook;Han, Jungho
    • Tuberculosis and Respiratory Diseases / v.43 no.6 / pp.882-893 / 1996
  • Background : Over one-third of solitary pulmonary nodules (SPNs) are malignant, but most malignant SPNs are in the early stages at diagnosis and can be cured by surgical removal. Therefore, early diagnosis of a malignant SPN is essential for saving the patient's life. The incidence of pulmonary tuberculosis in Korea is somewhat higher than that of other countries, and a large number of SPNs are found to be tuberculomas. Most primary physicians tend to regard a newly detected solitary pulmonary nodule as a tuberculoma based only on noninvasive imaging such as CT, and they prefer clinical observation without further invasive procedures if the findings suggest benignancy. Many kinds of noninvasive procedures for confirmatory diagnosis have been introduced to differentiate malignant SPNs from benign ones, but none of them has been satisfactory. FDG-PET is a unique tool for imaging and quantifying the status of glucose metabolism. On the basis that glucose metabolism is increased in malignant transformed cells compared with normal cells, FDG-PET is considered a satisfactory noninvasive procedure for differentiating malignant SPNs from benign ones. We therefore performed FDG-PET in patients with a solitary pulmonary nodule and evaluated its diagnostic accuracy in the diagnosis of malignant SPNs. Method : 34 patients with a solitary pulmonary nodule less than 6 cm in diameter who visited Samsung Medical Center from September, 1994 to September, 1995 were evaluated prospectively. Simple chest roentgenography, chest computed tomography, and FDG-PET scans were performed for all patients. The results of FDG-PET were compared with the final diagnosis confirmed by sputum study, PCNA, fiberoptic bronchoscopy, or thoracotomy. Results : (1) There was no significant difference in nodule size between malignant (3.1±1.5 cm) and benign nodules (2.8±1.0 cm) (p>0.05). (2) The peak SUV (standardized uptake value) of malignant nodules (6.9±3.7) was significantly higher than that of benign nodules (2.7±1.7), and time-activity curves showed a continuous increase in malignant nodules. (3) Three false negative cases were found among eighteen malignant nodules in the FDG-PET imaging study, and all three cases were nonmucinous bronchioloalveolar carcinomas less than 2 cm in diameter. (4) FDG-PET imaging resulted in 83% sensitivity, 100% specificity, 100% positive predictive value, and 84% negative predictive value. Conclusion : FDG-PET imaging is a new noninvasive diagnostic method for solitary pulmonary nodules that has high accuracy in the differential diagnosis between malignant and benign nodules. FDG-PET imaging could be used for the differential diagnosis of an SPN that is not properly diagnosed with conventional methods before thoracotomy. Considering its high accuracy, this procedure may play an important role in making the decision to perform thoracotomy in difficult cases.
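
As a quick check, the reported accuracy figures follow from the confusion matrix implied by the abstract: 18 malignant nodules with 3 false negatives, and 16 benign nodules with no false positives. A minimal Python sketch (counts reconstructed from the abstract, not taken from the paper's tables):

```python
# Confusion-matrix counts implied by the abstract (34 nodules in total).
tp = 18 - 3   # malignant nodules with positive FDG-PET
fn = 3        # nonmucinous bronchioloalveolar carcinomas < 2 cm missed
tn = 34 - 18  # benign nodules, all correctly negative (specificity 100%)
fp = 0

sensitivity = tp / (tp + fn)  # 15/18 ~ 0.83
specificity = tn / (tn + fp)  # 16/16 = 1.00
ppv = tp / (tp + fp)          # 15/15 = 1.00
npv = tn / (tn + fn)          # 16/19 ~ 0.84

print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}, "
      f"PPV={ppv:.0%}, NPV={npv:.0%}")
```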

Case Analysis of the Promotion Methodologies in the Smart Exhibition Environment (스마트 전시 환경에서 프로모션 적용 사례 및 분석)

  • Moon, Hyun Sil;Kim, Nam Hee;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.171-183 / 2012
  • With the development of technologies, the exhibition industry has received much attention from governments and companies as an important form of marketing activity, and exhibitors likewise treat exhibitions as a new channel for their marketing. However, the growing size of exhibitions, in net square feet and in the number of visitors, naturally creates a competitive environment. Therefore, to use effective marketing tools in this environment, exhibitors have planned and implemented many promotion techniques. In particular, a smart exhibition environment, which lets them provide real-time information to visitors, enables various kinds of promotion. However, promotions that ignore visitors' varied needs and preferences lose their original purpose and function: indiscriminate promotions that feel like spam to visitors cannot achieve their goals. What is needed is an approach based on STP strategy, which segments visitors on the right evidence (Segmentation), selects the target visitors (Targeting), and gives proper services to them (Positioning). To use an STP strategy in the smart exhibition environment, we consider its characteristics. First, an exhibition is defined as a market event of a specific duration, held at intervals. Accordingly, exhibitors who plan promotions should plan different events and promotions for each exhibition; when traditional STP strategies are adopted, the system may have to provide services using insufficient information about existing visitors, and its performance must still be guaranteed. Second, for automatic segmentation, cluster analysis, a commonly used data mining technique, can be adopted. In the smart exhibition environment, visitor information can be acquired in real time, and services using this information should also be provided in real time. However, many clustering algorithms have a scalability problem, working poorly on large databases and requiring domain knowledge to determine input parameters; a suitable methodology therefore has to be selected and fitted so that real-time services can be provided. Finally, the data available in the smart exhibition environment should be exploited. Because there are useful data such as booth visit records and event participation records, an STP strategy for the smart exhibition can rest not only on demographic segmentation but also on behavioral segmentation. In this study, we therefore analyze a case of a promotion methodology in which exhibitors provide a differentiated service to segmented visitors in the smart exhibition environment. First, considering the characteristics of the smart exhibition environment, we draw evidence for segmentation and fit the clustering methodology to provide real-time services. There are many studies on classifying visitors, but we adopt a segmentation methodology based on visitors' behavioral traits. Through direct observation, Veron and Levasseur classified visitors into four groups, likening visitors' traits to animals (butterfly, fish, grasshopper, and ant). Because the variables of their classification, such as the number of visits and the average time of a visit, can be estimated in the smart exhibition environment, it provides a theoretical and practical basis for our system (a clustering sketch on these two variables follows below). Next, we construct a pilot system which automatically selects suitable visitors according to the objectives of promotions and instantly sends promotion messages to them.
That is, based on the segmentation of our methodology, our system automatically selects suitable visitors according to the characteristics of each promotion. We deployed this system in a real exhibition environment and analyzed the resulting data. Having classified visitors into four types through their behavioral patterns in the exhibition, we provide insights for researchers who build smart exhibition environments and derive promotion strategies fitting each cluster. First, visitors of the ANT type show a high response rate for all promotion messages except experience promotions: they are attracted by tangible benefits in the exhibition area and dislike promotions requiring a long time. By contrast, visitors of the GRASSHOPPER type show a high response rate only for experience promotions. Second, visitors of the FISH type show a preference for coupon and content promotions; although they do not examine booths in detail, they prefer to obtain further information such as brochures. Exhibitors that want to convey much information in a limited time should pay particular attention to visitors of this type. Consequently, these promotion strategies are expected to give exhibitors insights when they plan and organize their activities and to improve their performance.
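
The segmentation step described above can be illustrated with a small clustering sketch. This is not the authors' pilot system; it is a hypothetical k-means example on the two behavioral variables the abstract names (number of visits, average time per visit), with synthetic data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two illustrative behavioral features per visitor:
# column 0 = number of booth visits, column 1 = average minutes per visit.
visits = rng.integers(1, 40, size=(200, 1)).astype(float)
avg_time = rng.uniform(0.5, 15.0, size=(200, 1))
X = np.hstack([visits, avg_time])

# Standardize so neither feature dominates the Euclidean distance.
scaler = StandardScaler().fit(X)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(scaler.transform(X))

# Report the four cluster centers back in the original units
# (cf. the butterfly / fish / grasshopper / ant typology).
for i, (n_visits, minutes) in enumerate(
        scaler.inverse_transform(kmeans.cluster_centers_)):
    print(f"cluster {i}: ~{n_visits:.0f} visits, ~{minutes:.1f} min/visit")
```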

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), the data input and output speed (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, this information can serve as an important new source for the creation of value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) present the importance of a topic through a treemap based on a score system and frequency; (4) visualize the daily time-series graph of keywords via keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale up from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; interaction with the data is easy, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and presents issues detected on Twitter in an easy and intuitive manner. The proposed work demonstrates the effectiveness of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS); based on this, we confirm the utility of storytelling and time series analysis (an illustrative topic modeling sketch follows below). Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
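
The topic extraction at the core of TITS can be sketched with an off-the-shelf LDA implementation. The abstract does not specify the library or parameters, so the following gensim example is illustrative only:

```python
# Minimal sketch of the kind of topic extraction TITS performs, using
# gensim's LDA; the toy corpus and all parameters are illustrative.
from gensim import corpora
from gensim.models import LdaModel

tweets = [
    ["election", "vote", "candidate"],
    ["election", "debate", "candidate"],
    ["baseball", "league", "season"],
    ["baseball", "pitcher", "season"],
]  # pretend these are tokenized tweets with stop words removed

dictionary = corpora.Dictionary(tweets)
bow_corpus = [dictionary.doc2bow(t) for t in tweets]

lda = LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
               passes=10, random_state=0)

# Daily topic keyword sets, analogous to TITS's ranking view.
for topic_id in range(2):
    print(topic_id, lda.print_topic(topic_id, topn=3))
```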

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities using only a single sensor's data, i.e., the smartphone accelerometer data. The approach that we take to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how the set of classes is split into two subsets at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than the others; however, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning, and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we use another ensemble classifier, the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest has more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window covering the last 2 seconds (see the sketch below). For experiments comparing the performance of END with those of other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers. Of the 5,900 (= 5 × (60 × 2 − 2) / 0.1) data points collected for each activity (the data for the first 2 seconds are discarded because they lack a full time window), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with other similar activities, END classified all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
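
The window features described above are straightforward to compute. A minimal sketch under the abstract's sampling assumptions (10 Hz, 2-second window); the function and variable names are ours, not the paper's:

```python
# Illustrative feature extraction: acceleration magnitude at each time
# point plus max / min / std over the preceding 2-second window
# (20 samples at 10 Hz), as described in the abstract.
import numpy as np

def window_features(acc_xyz: np.ndarray, window: int = 20) -> np.ndarray:
    """acc_xyz: (n_samples, 3) accelerometer readings at 0.1 s intervals."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    feats = []
    for t in range(window, len(mag)):
        w = mag[t - window:t]
        feats.append([mag[t], w.max(), w.min(), w.std()])
    # The first 2 seconds yield no feature row, matching the paper's
    # discarded samples.
    return np.asarray(feats)

acc = np.random.default_rng(0).normal(0, 1, size=(1200, 3))  # 2 min of fake data
print(window_features(acc).shape)  # (1180, 4)
```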

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.55-79 / 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively (a generic sketch of this binary-encoded GA loop follows the abstract). For the hybrid concept, the iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the network. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between centers and secondary markets. The fixed cost results from the opening or closing decision at each center and secondary market. That is, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively), and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiments, the proposed HGA and a conventional competing approach are compared using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; it lacks a local search technique such as the IHCM used in the proposed HGA approach. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two types of the RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are: total number of generations 10,000, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 iterations are made to eliminate the randomness of the HGA and GA searches.
Comparing performance, network representations by opening/closing decisions, and convergence processes on the two types of RLNCCs, the experimental results show that the HGA achieves significantly better optimal solutions than the GA, although the GA is slightly quicker in CPU time. Finally, the proposed HGA approach proves more efficient than the conventional GA approach on both types of the RLNCC, since the former combines a GA search process with a local search process as an additional search scheme, while the latter has a GA search process alone. In a future study, much larger RLNCCs will be tested for the robustness of our approach.
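
The binary-encoded GA loop the abstract describes (bit-string chromosomes, two-point crossover, random bit-flip mutation, elitist survival) can be sketched generically. This is not the authors' HGA: the IHCM local search and the real MIP cost function are omitted, and the fitness below is a placeholder:

```python
# Generic sketch of a 0/1-encoded GA with two-point crossover and
# bit-flip mutation; parameters and fitness are illustrative only.
import random

random.seed(0)
N = 12  # chromosome length: one bit per candidate facility (open/closed)

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(1, N), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def mutate(chrom, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in chrom]

def fitness(chrom):
    # Placeholder: in the paper this would be the negative of the MIP
    # total cost (transportation + fixed + handling); here a toy objective.
    return -sum(chrom)

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                 # elitist-style survival
    children = []
    while len(children) < 10:
        c1, c2 = two_point_crossover(*random.sample(parents, 2))
        children += [mutate(c1), mutate(c2)]
    pop = parents + children[:10]      # parents and offspring compete

print(max(pop, key=fitness))
```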

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.109-122 / 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and social problem solving by analyzing data that is openly available or collected directly. In Korea, various companies and individuals are attempting big data analysis, but it is difficult even at the initial stage because of limited big data disclosure and collection difficulties. Nowadays, system improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, chiefly services opening public data such as the domestic Government 3.0 portal (data.go.kr). In addition to these government efforts, services that share data held by corporations or individuals are running, but it is difficult to find useful data because of the shortage of shared data. Moreover, big data traffic problems can occur, because the entire dataset must be downloaded and examined to grasp the attributes of and basic information about the shared data. A new system for big data processing and utilization is therefore needed. First, big data pre-analysis technology is needed to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper for solving the sharing problem: the data is analyzed in advance, and the generated results are provided to users. Through pre-analysis, the usability of big data improves, because users searching for big data receive information that conveys its properties and characteristics. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that may arise when original data is disclosed are mitigated, enabling big data sharing between the data provider and the data user (a sketch of this pre-analysis step appears after the abstract). Second, appropriate preprocessing results must be generated quickly according to the disclosure level or network status of the raw data, and provided to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing data requested by a user, it reduces the data to a size transferable on the current network before transmission, so that no big traffic occurs. In this paper, we present various data sizes according to the disclosure level determined through pre-analysis. This method is expected to generate far less traffic than the conventional approach of sharing only raw data across a large number of systems. We describe how to solve the problems that occur when big data is released and used, and how to facilitate its sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, and comprises a Server Agent and a Client Agent, deployed on the server and client sides respectively. The Server Agent is the agent needed by the data provider: it performs the pre-analysis of big data to generate a Data Descriptor with information on the Sample Data, Summary Data, and Raw Data, performs fast and efficient big data preprocessing through distributed big data processing, and continuously monitors network traffic. The Client Agent is the agent placed on the data user's side.
It can search big data through the Data Descriptor, the result of the pre-analysis, and quickly locate the desired data, which can then be requested from the server and downloaded according to its disclosure level. The Server Agent and the Client Agent are separated so that data published by the provider can be used by the user. In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, construct the detailed modules of the client-server model, and present the design of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses it into new data; by publishing the newly processed data through a Server Agent, the data user changes roles and becomes a data provider. The data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analyses on the sample data. In this way, raw data is processed and the processed big data is used by others, forming a natural sharing environment. The roles of data provider and data user are not fixed, yielding an ideal shared service in which everyone can be both a provider and a user. The client-server model thus solves the big data sharing problem, provides a free sharing environment for secure big data disclosure, and makes it easy to find big data.
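
The pre-analysis step can be sketched as a function that turns a raw dataset into a Data Descriptor holding schema, summary statistics, and a small shareable sample. The field names below are hypothetical, not the paper's specification:

```python
# Hypothetical sketch of a Server Agent's pre-analysis: build a Data
# Descriptor so users can judge a dataset without downloading raw data.
import json
import pandas as pd

def build_data_descriptor(df: pd.DataFrame, sample_rows: int = 5) -> dict:
    return {
        "columns": {c: str(t) for c, t in df.dtypes.items()},  # schema
        "row_count": len(df),
        "summary": json.loads(df.describe().to_json()),   # per-column stats
        "sample": df.head(sample_rows).to_dict("records"),  # shareable sample
    }

df = pd.DataFrame({"sensor": ["a", "b", "a"], "value": [1.2, 3.4, 2.2]})
print(json.dumps(build_data_descriptor(df), indent=2))
```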

Breeding and Development of the Tscherskia triton in Jeju Island (제주도 서식 비단털쥐(Tscherskia triton)의 번식과 발달)

  • Park, Jun-Ho;Oh, Hong-Shik
    • Korean Journal of Environment and Ecology / v.31 no.2 / pp.152-165 / 2017
  • The greater long-tailed hamster, Tscherskia triton, is widely distributed in northern China, Korea, and adjacent areas of Russia. Apart from its distribution, the biological characteristics of this species related to life history, behavior, and ecological influence are rarely studied in Korea. This study was conducted to obtain biological information on breeding, growth, and development that is basic to species-specific studies. The study adopted laboratory management of a breeding programme for T. triton collected on Jeju Island from March 2015 to December 2016. According to the results, the conception rate was 31.67%, and the mice in the large cages had a higher rate of conception than those in the small cages (56.7 vs. 6.7%). The gestation period was 22±1.6 days (range 21 to 27 days), and litter size ranged from 2 to 7, with a mean of 4.26±1.37. The minimum age at weaning was 19.2±1.4 days (range 18-21 days). There were no significant sex differences in mean body weight or external body measurements at birth. However, a significant sexual difference was found from the period of weaning (21 days old) in head and body length as well as tail length (HBL-weaning, 106.50±6.02 vs. 113.34±4.72 mm, p<0.05; HBL-4 months, 163.93±5.42 vs. 182.83±4.32 mm, p<0.05; TL-4 months, 107.23±3.25 vs. 93.95±2.15 mm, p<0.05). Gompertz and logistic growth curves were fitted to the data for body weight and the lengths of head and body, tail, ear, and hind foot (a curve-fitting sketch follows below). In the two types of growth curves, males exhibited greater asymptotic values (164.840±7.453 vs. 182.830±4.319 mm, p<0.0001; 163.936±5.415 vs. 182.840±4.333 mm, p<0.0001), faster maximum growth rates (1.351±0.065 vs. 1.435±0.085, p<0.05; 2.870±0.253 vs. 3.211±0.635, p<0.05), and a later age of maximum growth than females in head and body length (5.121±0.318 vs. 5.520±0.333, p<0.05; 6.884±0.336 vs. 7.503±0.453, p<0.05). However, females exhibited greater asymptotic values (105.695±5.938 vs. 94.150±2.507 mm, p<0.001; 111.609±14.881 vs. 93.960±2.150 mm, p<0.05) and a longer length at inflection (60.306±1.992 vs. 67.859±1.330 mm, p<0.0001; 55.714±7.458 vs. 46.975±1.074 mm, p<0.05) than males in tail length. The growth rate constants, viz. the morphological characters and weights, of males and females were similar to each other in the two types of growth curves. These results will serve as baseline data for studying the species specificity of T. triton on a biological foundation.
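
The growth-curve fitting can be sketched with a standard least-squares fit. The Gompertz form below, y(t) = A·exp(−exp(−k(t − t0))), is one common parameterization; the study's exact parameterization and data are not reproduced here, so the numbers are synthetic:

```python
# Illustrative Gompertz fit to body-length measurements, in the spirit
# of the study's head-and-body / tail length curves. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, k, t0):
    # A: asymptotic size, k: growth-rate constant, t0: age of max growth
    return A * np.exp(-np.exp(-k * (t - t0)))

age_days = np.arange(0, 120, 5)
true = gompertz(age_days, 180.0, 0.08, 25.0)
length_mm = true + np.random.default_rng(0).normal(0, 3, age_days.size)

params, _ = curve_fit(gompertz, age_days, length_mm, p0=[150, 0.05, 20])
print("A=%.1f mm, k=%.3f /day, t0=%.1f days" % tuple(params))
```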

A Study on Industries' Leading of the Stock Market in Korea - Gradual Diffusion of Information and Cross-Asset Return Predictability - (산업의 주식시장 선행성에 관한 실증분석 - 자산간 수익률 예측 가능성 -)

  • Kim Jong-Kwon
    • Proceedings of the Safety Management and Science Conference / 2004.11a / pp.355-380 / 2004
  • I test the hypothesis that the gradual diffusion of information across asset markets leads to cross-asset return predictability in Korea. Using thirty-six industry portfolios and the broad market index as the test assets, I establish several key results. First, a number of industries such as semiconductors, electronics, metal, and petroleum lead the stock market by up to one month. In contrast, the market, which is widely followed, leads only a few industries. Importantly, an industry's ability to lead the market is correlated with its propensity to forecast various indicators of economic activity such as industrial production growth. Consistent with the hypothesis, these findings indicate that the market reacts with a delay to information in industry returns about its fundamentals, because information diffuses only gradually across asset markets (a sketch of the lead-lag regression appears after the abstract). Traditional theories of asset pricing assume that investors have unlimited information-processing capacity. However, this assumption does not hold for many traders, even the most sophisticated ones. Many economists recognize that investors are better characterized as being only boundedly rational (see Shiller (2000), Sims (2001)). Even from casual observation, few traders can pay attention to all sources of information, much less understand their impact on the prices of the assets they trade. Indeed, a large literature in psychology documents the extent to which even attention is a precious cognitive resource (see, e.g., Kahneman (1973), Nisbett and Ross (1980), Fiske and Taylor (1991)). A number of papers have explored the implications of limited information-processing capacity for asset prices; I review this literature in Section II. For instance, Merton (1987) develops a static model of multiple stocks in which investors only have information about a limited number of stocks and only trade those. Related models of limited market participation include Brennan (1975) and Allen and Gale (1994). As a result, stocks that are less recognized by investors have a smaller investor base (neglected stocks) and trade at a greater discount because of limited risk sharing. More recently, Hong and Stein (1999) develop a dynamic model of a single asset in which information diffuses gradually across the investment public and investors are unable to perform the rational-expectations trick of extracting information from prices. My hypothesis is that the gradual diffusion of information across asset markets leads to cross-asset return predictability. This hypothesis relies on two key assumptions. The first is that valuable information originating in one asset reaches investors in other markets only with a lag, i.e. news travels slowly across markets. The second is that, because of limited information-processing capacity, many (though not necessarily all) investors may not pay attention to, or be able to extract information from, the asset prices of markets in which they do not participate. These two assumptions taken together lead to cross-asset return predictability. The hypothesis appears very plausible for a few reasons. To begin with, as pointed out by Merton (1987) and the subsequent literature on segmented markets and limited market participation, few investors trade all assets. Put another way, limited participation is a pervasive feature of financial markets. Indeed, even among equity money managers there is specialization along industries, such as sector or market-timing funds.
Some reasons for this limited market participation include tax, regulatory, or liquidity constraints. More plausibly, investors have to specialize because they have their hands full trying to understand the markets that they do participate in.
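
The lead-lag relation the paper tests can be sketched as a predictive regression of this month's market return on last month's industry portfolio return. The data below are simulated; the paper's actual test assets are thirty-six Korean industry portfolios and the broad market index:

```python
# Minimal lead-lag sketch: does last month's industry return predict this
# month's market return? A positive, significant lag coefficient is the
# pattern the paper reports for industries like semiconductors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 240                                  # 20 years of monthly returns
industry = rng.normal(0.01, 0.05, T)
# Simulate a market that reflects industry news with a one-month delay.
market = 0.3 * np.roll(industry, 1) + rng.normal(0.01, 0.04, T)

y = market[1:]                           # market return at t
X = sm.add_constant(industry[:-1])       # industry return at t-1
res = sm.OLS(y, X).fit()
print(res.params, res.tvalues)           # positive lag coefficient => lead
```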

The Effect of Two Terpenoids, Ursolic Acid and Oleanolic Acid on Epidermal Permeability Barrier and Simultaneously on Dermal Functions (우솔릭산과 올레아놀산이 피부장벽과 진피에 미치는 영향에 대한 연구)

  • Lim, Suk Won;Jung, Sung Won;Ahn, Sung Ku;Kim, Bora;Kim, In Young;Ryoo, Hee Chang;Lee, Seung Hun
    • Journal of the Society of Cosmetic Scientists of Korea / v.30 no.2 / pp.263-278 / 2004
  • Ursolic acid (UA) and oleanolic acid (ONA), also known as urson, micromerol, and malol, are pentacyclic triterpenoid compounds which occur naturally in a large number of vegetarian foods, medicinal herbs, and plants. They may occur in their free acid form or as aglycones of triterpenoid saponins, which comprise a triterpenoid aglycone linked to one or more sugar moieties; UA and ONA are therefore similar in pharmacological activity. Recent scientific research on UA and ONA has revealed several pharmacological effects, such as antitumor, hepato-protective, anti-inflammatory, anticarcinogenic, antimicrobial, and anti-hyperlipidemic activity. Here, we report the effects of UA and ONA on acutely barrier-disrupted and normal hairless mouse skin. To evaluate the effects of UA and ONA on epidermal permeability barrier recovery, both flanks of 8-12-week-old hairless mice were topically treated with either 0.01-0.1 mg/mL UA or 0.1-1 mg/mL ONA after tape stripping, and TEWL (transepidermal water loss) was measured. The recovery rate increased in the UA- and ONA-treated groups (0.1 mg/mL UA and 0.5 mg/mL ONA) at 6 h by more than 20% compared to the vehicle-treated group (p<0.05). To verify the effects of UA and ONA on the normal epidermal barrier, hydration and TEWL were measured for 1 and 3 weeks after UA and ONA applications (2 mg/mL per day). We also investigated the features of the epidermis and dermis using electron microscopy (EM) and light microscopy (LM). Both samples increased hydration compared to the vehicle group from 1 week without TEWL alteration (p<0.005). EM examination using RuO4 and OsO4 fixation revealed that secretion and numbers of lamellar bodies and complete formation of lipid bilayers were most prominent (ONA=UA > vehicle). LM findings showed that the thickness of the stratum corneum (SC) was slightly increased, and epidermal thickening and flattening were observed (UA > ONA > vehicle). We also observed that UA and ONA stimulate epidermal keratinocyte differentiation via PPAR: protein expression of involucrin, loricrin, and filaggrin increased at least 2- and 3-fold in HaCaT cells treated with either ONA (10 μM) or UA (10 μM) for 24 h, respectively. These results suggest that UA and ONA can improve epidermal permeability barrier function and induce epidermal keratinocyte differentiation via PPAR. Using Masson's trichrome and elastic fiber staining, we observed collagen thickening and elastic fiber elongation after UA and ONA treatments. In vitro measurements of collagen and elastin synthesis and elastase inhibitory activity also confirmed the in vivo findings. These data suggest that the effects of UA and ONA relate not only to epidermal permeability barrier functions but also to dermal collagen and elastic fiber synthesis. Taken together, UA and ONA are relevant candidates to improve epidermal and dermal functions and pertinent agents for cosmeceutical applications.

The Study about 「The Discourse on the Constitutional Symptoms and Diseases」 of Sasangin on the 『Dongyi Suse Bowon』 (『동의수세보원(東醫壽世保元)』 태소음양인(太少陰陽人)의 「병증론(病證論)」에 관(關)한 연구(硏究))

  • Lee, Su-kyung;Song, Il-byung
    • Journal of Sasang Constitutional Medicine / v.11 no.2 / pp.1-26 / 1999
  • This paper was written to understand the constitutional symptoms and diseases from two aspects. The first was to trace the courses through which the constitutional symptoms and diseases were derived from oriental medicine, through "Dongyi Bogam" and original writings such as "Shanghanlun". The second was to analyze the constitutional diseases through Lee Je-ma's own recognition of the human being and society, which was based on the "Dongyi Suse Bowon". The original concepts of 'the interior disease' and 'the exterior disease' were based on the Nature and the Emotion, the Environmental Frames and the Human Affairs, the Ears-Eyes-Nose-Mouth and the Lung-Spleen-Liver-Kidney. The exterior diseases were caused by the abilities of the ears to listen, the eyes to see, the nose to smell, and the mouth to taste in the environmental frames, which were related to one's recognition of society. The interior diseases were caused by the abilities of the lung to study, the spleen to ask, the liver to think, and the kidney to judge in human affairs, which concerned the relationship between oneself and others. The titles of the constitutional diseases were named from these views in his first writing of "Dongyi Suse Bowon" in 1894. Thus the titles of the Taeyangin diseases, 'The Lumbar Vertebrae Disease Induced by Exopathogen' and 'The Small Intestine Disease Induced by Endopathogen', remained as in the first writing. But the titles of the constitutional diseases were rewritten into the present titles in 1900. In order to express the pathology and mechanism of the constitutional diseases exactly, he rewrote the titles to contain the manifestation sites of the diseases, the febrile and cold symptoms, and the different congenital formations of the organs. The exterior and interior diseases had three characteristics. The first was that the exterior diseases, injured by the Nature, tended to progress slowly, while the interior diseases, injured by the Emotion, tended to progress rapidly. The second was that the interior and exterior diseases were not separate: one influenced the other, and they were revealed together as one disease when the diseases continued for a long time. The third was that even when diseases occurred together, they were classified under the beginning disease. The symptoms in ordinary times were the origin of and clue to recognizing the constitutional symptoms and diseases, and they enabled the establishment of constitutional medicine, which treats by different methods according to constitution. This differed from Traditional Chinese Medicine in the appearance of diseases in three respects. The first was that the disease progressed to the next step from the symptoms in ordinary times. The second was that, under the same disease, each constitution had different symptoms, which were due to the symptoms in ordinary times. The third was that, within the same constitution, the manifestations of disease differed according to the symptoms in ordinary times. But the most important point was that Lee Je-ma recognized these symptoms in ordinary times in four categories and presented the constitutional symptoms and constitutional diseases; the four categories were his method of recognizing human beings and diseases. When the symptoms and diseases of Sasang Constitutional Medicine are compared to Traditional Chinese Medicine, the constitutional diseases of "Dongyi Suse Bowon" can be classified into two groups.
The first group comprises the unique diseases and symptoms not found in Traditional Chinese Medicine, which were established by Lee Je-ma: the diseases of Taeyangin, the exterior disease of Taeumin, and the exterior disease of Soyangin. The second group uses unique treatment methods not found in Traditional Chinese Medicine, also established by Lee Je-ma: the interior disease of Taeumin, the delirium diseases from the MangYin of Soyangin, and the treatment to help the Yang-Qi ascend and to supplement the Qi in the exterior disease of Soeumin. Especially, the diseases of Taeyangin and Taeumin, which were caused by the metabolic disorders of Qi-Yack(氣液), were a great achievement in establishing the constitutional symptoms and diseases. The discourse on Taeyangin diseases presented his original thought of recognizing the symptoms and diseases through the Shin-Gi-Hyul-Jeong(神氣血精) and the Qi-Yack; the discourse on Taeumin diseases presented the dispersal of Qi-Yack through the forward and backward movement of sweat; the discourse on Soyangin diseases presented the sweat of the hands and feet, which manifested that the Yin-Qi of the spleen descended to the Yin-Qi of the kidney, and the bowel movement, which manifested that the Yang-Qi of the large intestine ascended to the head, face, and four extremities; and the discourse on Soeumin diseases presented the Jueyin syndrome without abdominal pain and diarrhea as an exterior disease and emphasized the nervous mind. Finally, the classification of exterior and interior diseases was due not to pharmacology but to the symptoms and diseases according to constitution.
