
Visual Media Education in Visual Arts Education (미술교육에 있어서 시각적 미디어를 통한 조형교육에 관한 연구)

  • Park Ji-Sook
    • Journal of Science of Art and Design
    • /
    • v.7
    • /
    • pp.64-104
    • /
    • 2005
  • Visual media transmit images and information reproduced in large quantities, such as photography, film, television, video, advertisements, and computer images. Art education must respond to the ways students receive and recognize this culture, and must prepare a field of study for visual culture. 'Visual culture' refers to the cultural phenomena of visual images conveyed through visual media; it includes not only the traditional art categories of painting, sculpture, print, and design, but also performance arts such as fashion shows and carnival parades, and the mass and electronic media of photography, film, television, video, advertising, cartoons, animation, and computer images. In the world of visual media, the 'image' functions as an essential medium of communication. The culture of today is therefore called the 'era of image culture', having shifted from a text-centered era to an image-centered one. The image, via visual media, has become a dominant means of communication in large parts of human life, so we can designate the 'image' as a typical aspect of visual culture today. As an essential medium of communication, the image plays an important role in contemporary society. Digital images are produced in two ways. One is the conversion of an analogue image, such as an actual picture, photograph, or film, into a digital one through digitization by a digital camera or scanner acting as an analogue/digital converter. The other is processing with a computer, by drawing or modeling objects directly; this is suited to the production of pictorial and surreal images. Digital images can be divided into 'pixel' and 'vector' forms. A vector is a line linking a point of departure to a point of end, which organizes the information; the computer stores each line's reference location and the locations of the lines relative to one another. The digital image shows far more 'perfectness' than any other visual medium. Digital imagery has been evolving in diverse directions, such as the production of geometrical or organic image composites, interactive art, multimedia art, and web art, in which the computer is applied as an extended tool of painting. The digitized copy, with its endless reproduction of the original, is sometimes even interpreted as an extension of the print. Visual art is no longer a simple activity of representation by a painter or sculptor; it is now intimately associated with the application of media. There are, however, problems with images via visual media. First, the image via media does not reflect reality as it is, but reflects an artificially manipulated world, that is, a virtual reality. Second, the introduction of digital effects and the development of image-processing technology have enhanced the spectacle of destructive and violent scenes. Third, children tend to recognize the interactive images of computer games and virtual reality as reality, or truth. Education needs not only to point out the ill effects of mass media and prevent the younger generation from being damaged by them, but also to offer the knowledge and know-how to cope actively with social and cultural circumstances. Visual media education is one of the essential methods for contemporary and future human beings in an age overflowing with image information. The fostering of 'visual literacy' can be considered the very purpose of visual media education: a way to lead individuals, as far as possible, to become discerning, active consumers and producers of visual media in life.
The elements of 'visual literacy' can be divided into the faculties of recognition of visual media, critical reception, appropriate application, active production, and creative modeling, all of which are promoted together by visual literacy education. In conclusion, the education of visual literacy guides students to comprehend and discriminate visual image media carefully, receive them critically, apply them properly, and produce them creatively and voluntarily. Moreover, it leads to artistic activity by means of new media. This education can be approached and enhanced through connection and integration with real life. The visual arts and their education play an important role in a digital era that depends on visual communication via image information. Visual media today function as an essential element both in daily life and in the arts. Students can soundly understand the visual phenomena of today by means of visual media and apply them as an expressive tool of life culture as well. A new recognition and valuation of visual image and media education is required to cultivate the capability to deal actively and soundly with the changes in the history of civilization. 1) Visual media education helps cultivate a sensibility for images that reacts to and deals with one's circumstances. 2) It helps students comprehend contemporary arts and culture via new media. 3) It gives students the chance to experience visual modeling by means of new media. 4) It offers educational opportunities with images possessing temporality and spatiality, and so increases the number of discerning viewers. 5) Modeling activity via new media leads students to remain continuously interested in the study and production of plastic arts. 6) It raises the ability of visual communication needed to deal with an image-information society. 7) An education in digital images is also significant for cultivating people of talent for the future image-information society. To correspond to changing and developing social and cultural circumstances, and to the forms in which students receive and recognize them, visual arts education must arrange a field of study on the new visual culture. Besides, a more systematic and active program needs to be developed for visual media education. Educational contents should be extended to the media for visual images, that is, photography, film, television, video, computer graphics, animation, music video, computer games, and multimedia. Each medium must be approached separately, because each maintains its own modes and peculiarities according to the form in which it conveys its message. Concrete and systematic teaching methods and the quality of education must be researched and developed, centering on the development of a course of study. Teachers' foundational teaching capability should be cultivated for visual media education. Here, attention must be paid to the fact that the technological level of the media is secondary: school education does not intend to train expert, skillful producers, but to lay stress on essential aesthetic education with visual media in its social and cultural context, with respect to the consumer as a person of culture.


In Vitro Fertilization of Pig Oocytes Matured In Vitro by Liquid Boar Spermatozoa (체외성숙 돼지 난포란의 액상정액을 이용한 체외수정)

  • 박창식;이영주
    • Korean Journal of Animal Reproduction
    • /
    • v.26 no.1
    • /
    • pp.17-23
    • /
    • 2002
  • The present study was carried out to investigate the effects of maturation media, namely a modified TCM-199 (mTCM-199) medium, a modified Waymouth MB 752/1 (mWaymouth MB 752/1) medium, and NCSU-23 medium, on the penetrability of pig oocytes by liquid boar sperm. Oocytes (30~40) were transferred into each well of a Nunc 4-well multidish containing 0.5 ml of maturation medium. When immature pig oocytes were cultured in mTCM-199, mWaymouth MB 752/1, and NCSU-23 maturation media for 44 h in 5% CO2 in air at 38.5°C, the germinal vesicle breakdown (GVBD) rates of the oocytes were 95.6, 94.1, and 94.9%, respectively, and the maturation rates (metaphase II) were 92.5, 90.1, and 91.1%, respectively. No differences were observed among the maturation media. The sperm-rich portions of ejaculates with greater than 90% motile sperm were used in the experiment. The semen was cooled to 22~24°C over a 2 h period and diluted with Beltsville Thawing Solution (BTS) extender at room temperature to give 2×10^8 sperm/ml. Thirty ml of liquid boar semen in a 100 ml plastic bottle was kept at 17°C for 5 days. Sperm with greater than 70% motility after day 5 of storage were used for in-vitro fertilization (IVF). After 44 h of maturation, cumulus cells were removed and oocytes (30~40) were co-incubated for 6 h in 0.5 ml of mTCM-199 or mTBM fertilization medium at a concentration of 2×10^6 sperm/ml. At 6 h after IVF, oocytes were transferred into 0.5 ml of mTCM-199 or NCSU-23 culture medium for a further 6 or 42 h of culture. Sperm penetration, polyspermy, and male pronuclear formation were evaluated at 12 h after IVF, and the developmental ability of the oocytes at 48 h after IVF. The oocytes matured in NCSU-23 medium and fertilized in mTBM medium showed increased male pronuclear formation (48.0%) compared with those matured and fertilized in mTCM-199 media, or matured in mWaymouth MB 752/1 medium and fertilized in mTCM-199 medium. The rates of cleaved embryos (2~4 cell stage) at 48 h after IVF were 24.1% with mTCM-199 media for maturation, IVF, and culture; 43.6% with mWaymouth MB 752/1 medium for maturation and mTCM-199 media for IVF and culture; and 71.2% with NCSU-23 medium for maturation, mTBM medium for IVF, and NCSU-23 medium for culture. In conclusion, we found that oocytes matured in vitro could be fertilized by liquid boar sperm stored in BTS extender at 17°C for 5 days. We recommend the simple defined NCSU-23 medium for nuclear maturation, mTBM medium and liquid boar sperm for IVF, and NCSU-23 medium for embryo culture.

The Changing Aspects of North Korea's Terror Crimes and Countermeasures : Focused on Power Conflict of High Ranking Officials after Kim Jong-IL Era (북한 테러범죄의 변화양상에 따른 대응방안 -김정일 정권 이후 고위층 권력 갈등을 중심으로)

  • Byoun, Chan-Ho;Kim, Eun-Jung
    • Korean Security Journal
    • /
    • no.39
    • /
    • pp.185-215
    • /
    • 2014
  • Since North Korea has used terror crimes as a means toward unification under communism, South Korea has suffered considerable damage, and the possibility of terror crimes by the North Korean authorities is now higher than at any other time. The North Korean terror crimes of the Kim Il Sung era were committed on the dictator's instruction, with the object of securing governing funds. Looking at the terror crimes committed over the decades of the Kim Jung Il regime, however, it is revealed that these crimes were expressions of criminal behavior arising from conflicts between groups seeking power and economic advantage. This study focuses on power conflict among the various causes of terror crimes by applying George B. Vold's (1958) theory, which explains how power conflict between groups becomes a factor in crime, and traces the changing aspects, by period, of terror crimes committed by the North Korean authorities along with plans for responding to future North Korean terror crimes. For decades in the Kim Il Sung era, North Korea's high-ranking officials were drawn from the Labor Party, centered on the Juche idea. Afterwards, high-ranking officials were formed around the military authorities following the Military First Policy at the beginning of the Kim Jung Il regime, and rapid power changes have occurred over the past 10 years. Tracing the aspects of terror crime that followed these power changes: after 1995, party executives alienated by Kim Jung Il's active support of the military-first authority could not object to the military's forcible terror crimes, and the first and second Yeonpyeong naval battles of this period were propelled by the military-first authority to display the military's power. After 2006, the conservative party union enforced censorship and inspection of the military authority's trade business and foreign-currency earning while carrying out drastic purges. The shooting of Keumkangsan tourists that happened in this period was a forcible terror crime by the military authority under pressure from the conservative party. After October 2008, the first military-reign union executed the launch of the Gwanmyungsung No. 2 long-range missile, the second nuclear test, the Daechung naval battle, and the Cheonanham attack in order to highlight the importance and role of the military authority. After September 2010, the new reign union went through severe competition between the new military authority and the new mainstream, and the new military authority of this period executed highly professionalized terror crimes, such as cyber and electronic terror, unlike the past military authority. After July 2012, the ICBM test launch, the third nuclear test, and the cyber terror against the Cheongwadae homepage by the new mainstream association reflected Kim Jung Eun's intention to display his ability and to check and adjust the power of the party, military, cabinet, and public security organs; he may attempt unexpected terror crimes in the future. North Korean terror crime has continued since the 1980s, when Kim Jung Il's power succession was carried out; the power structure has changed rapidly since Kim Il Sung's death in 1994, and terror crimes have intensified with the power struggles among high-ranking officials and the conflict over the seizure of vested interests.
South Korea should now establish a specialized department that synthesizes and analyzes information on North Korean high-ranking officials, and reinforce its comprehensive information-collecting system through the protection and management of North Korean defectors and secret agents, in order to determine the causes of North Korean terror crimes and respond to them. South Korea should also participate actively in international collaboration concerning North Korean terror and make direct efforts to secure international agreement on cooperation in responding to North Korean terror crimes. Finally, realistic countermeasures should be prepared against North Korean cyber and electronic terror, which has diversified into expert forms beyond conventional forcible terror, through the enactment and revision of laws on cyber terror crime, the organization of relevant institutions and budgets, the training of professional manpower, and technical development.


A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or for sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rankings to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable: in a folksonomic system, each link corresponding to a property can have the opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores for various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned according to the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that the Semantic Web contains many heterogeneous classes, so applying a different appraisal standard to each class is more reasonable. This is similar to human evaluation, where different items are assigned specific weights that are then summed to determine a weighted average. We can also check for missing properties more easily with this approach than with predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the order of assignment. This idea comes from studies in psychology, wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his or her collection. Such documents are identified by the number, as well as the expertise, of the users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should carry more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are addressed and which can easily be customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism for adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data, whose ranking results can be predicted, into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm, based on the concept of mutual interaction, appears preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking, because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a large difference in calculation time and memory use between the two kinds of algorithms: while the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with ours. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
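
For context on the HITS baseline compared against above, the sketch below shows Kleinberg's authority/hub iteration on a toy adjacency matrix. The data and the Python/NumPy formulation are illustrative assumptions, not the paper's implementation or dataset.

```python
import numpy as np

# Minimal HITS sketch: A[i, j] = 1 means node i links to node j
# (e.g., a user bookmarks a resource). Hypothetical toy graph.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

hubs = np.ones(A.shape[0])
auths = np.ones(A.shape[0])
for _ in range(50):                 # iterate to (practical) convergence
    auths = A.T @ hubs              # authority: in-links from good hubs
    auths /= np.linalg.norm(auths)
    hubs = A @ auths                # hub: out-links to good authorities
    hubs /= np.linalg.norm(hubs)

print("authority:", auths.round(3))
print("hub:", hubs.round(3))
```

Note how the scores depend on link direction; the mutual-interaction concept proposed in the paper is motivated precisely by removing this dependence for heterogeneous folksonomy entities.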

Usefulness of Pulsatile Flow Aortic Aneurysm Phantoms for Stent-graft Placement (스텐트그라프트 장치술을 위한 대동맥류 혈류 팬텀의 유용성)

  • Kim, Tae-Hyung;Ko, Gi-Young;Song, Ho-Young;Park, In-Kook;Shin, Ji-Hoon;Lim, Jin-Oh;Kim, Jin-Hyoung;Choi, Eu-Gene K.
    • Journal of radiological science and technology
    • /
    • v.30 no.3
    • /
    • pp.205-212
    • /
    • 2007
  • To evaluate the feasibility and efficacy of pulsatile-flow aortic aneurysm phantoms for in-vitro study. The phantoms consisted of a pulsating motor part (heart part) and an aortic aneurysm part, which mimicked true physiologic conditions. The heart part was created from a high-pressure water pump and a pulsatile-flow solenoid valve to simulate aortic flow. The aortic aneurysm part was manufactured from paper clay placed inside an acrylic plastic square box into which liquid silicone was poured. After the silicone had set, the clay was removed, and a silicone tube was used to connect the heart and aneurysm parts. We measured the change in pressure in relation to the valve opening time (pulse rate; Kruskal-Wallis test) and the pressures before and after stent-graft implantation (n = 5; Wilcoxon signed-rank test). The changes in pressure according to pulse rate were all statistically significant (p < 0.05). The systolic/diastolic pressures at the proximal aorta, the aortic aneurysm, and the distal aorta of the model were 157.80±1.92/130.20±1.92, 159.40±1.14/134.00±2.92, and 147.20±1.48/129.60±2.70 mmHg, respectively, when the pulse rate was 0.5 beats/second. The pressures changed to 161.40±1.34/90.20±1.64, 175.00±1.58/93.00±1.58, and 176.80±1.48/90.80±1.92 mmHg, respectively, when the pulse rate was 1.0 beats/second, and to 159.40±1.82/127.20±1.48, 166.60±1.67/138.00±1.87, and 161.00±1.22/135.40±1.67 mmHg, respectively, when it was 1.5 beats/second. With the pulse rate set at 1.0 beats/second, the pressures were 143.60±1.67/90.20±1.64, 147.20±1.92/84.60±1.82, and 137.40±1.52/88.80±1.64 mmHg after stent-graft implantation. The changes in pressure before and after stent-graft implantation were statistically significant (p < 0.05), except for the diastolic pressures at the proximal aorta (p = 1.00) and the distal aorta (p = 0.157). The aortic aneurysm phantom seems to be useful for evaluating the efficacy of stent-grafts before animal or clinical studies because of its easy reproducibility and its ability to display a wide range of pressures.
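
A minimal SciPy sketch of the two statistical tests named above, run on invented pressure readings (hypothetical values, not the phantom's measurements):

```python
from scipy import stats

# Hypothetical systolic readings (mmHg) at one site for three pulse rates.
p05 = [157, 159, 156, 158, 159]   # 0.5 beats/s
p10 = [161, 160, 162, 163, 161]   # 1.0 beats/s
p15 = [159, 158, 161, 160, 159]   # 1.5 beats/s
h, p = stats.kruskal(p05, p10, p15)         # pressure change vs. pulse rate
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

# Paired readings before/after stent-graft placement (hypothetical, n = 5).
before = [161, 160, 162, 163, 161]
after  = [143, 144, 142, 145, 144]
w, p = stats.wilcoxon(before, after)        # paired non-parametric test
print(f"Wilcoxon signed-rank: W = {w:.1f}, p = {p:.3f}")
```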


A Study on the Availability of the On-Board Imager(OBI) and Cone-Beam CT(CBCT) in the Verification of Patient Set-up (온보드 영상장치(On-Board Imager) 및 콘빔CT(CBCT)를 이용한 환자 자세 검증의 유용성에 대한 연구)

  • Bak, Jino;Park, Sung-Ho;Park, Suk-Won
    • Radiation Oncology Journal
    • /
    • v.26 no.2
    • /
    • pp.118-125
    • /
    • 2008
  • Purpose: For on-line image guided radiation therapy (on-line IGRT), kV X-ray images or cone-beam CT images were obtained with an on-board imager (OBI) or cone-beam CT (CBCT), respectively. The images were then compared with simulation images to evaluate the patient's setup and correct for deviations. The setup deviations between the simulation images and the acquired images (kV or CBCT images) were computed by 2D/2D-match or 3D/3D-match programs, respectively. We then investigated the accuracy of the calculated deviations. Materials and Methods: After simulation and treatment planning for the RANDO phantom, the phantom was positioned on the treatment table. The phantom setup was performed with side-wall lasers, which standardized the treatment setup of the phantom against the simulation images, after tolerance limits for the laser line thickness had been established. After a known translation or rotation angle was applied to the phantom, kV X-ray images and CBCT images were obtained, and a 2D/2D match and a 3D/3D match with the simulation CT images were performed. Finally, the results were analyzed for the accuracy of the positional correction. Results: In the case of the 2D/2D match using kV X-ray and simulation images, setup correction was possible to within 0.06° for rotation only, 1.8 mm for translation only, and 2.1 mm and 0.3° for combined rotation and translation. For the 3D/3D match using CBCT images, correction was possible to within 0.03° for rotation only, 0.16 mm for translation only, and 1.5 mm for translation with 0.0° for rotation. Conclusion: The use of the OBI or CBCT for on-line IGRT makes it possible to reproduce the simulated setup exactly when positioning a patient in the treatment room. Fast detection and correction of a patient's positional error is possible in two dimensions via kV X-ray images from the OBI, and in three dimensions with higher accuracy via CBCT. Consequently, on-line IGRT represents a promising and reliable treatment procedure.
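
The 2D/2D match above is performed by the vendor's registration software, but the core idea of recovering a translational setup error can be sketched with FFT-based phase correlation. The toy images below are assumptions for illustration; clinical matching also estimates rotation and uses more robust similarity measures.

```python
import numpy as np

def translation_offset(ref: np.ndarray, live: np.ndarray):
    # Phase correlation: the inverse FFT of the normalized cross-power
    # spectrum peaks at the shift of `live` relative to `ref`.
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(live)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Convert wrap-around indices to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

ref = np.zeros((64, 64))
ref[20:30, 24:34] = 1.0                               # toy "anatomy"
live = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)   # known 3 px / -2 px shift
print(translation_offset(ref, live))                  # expected: (3, -2)
```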

Correlation between High-Resolution CT and Pulmonary Function Tests in Patients with Emphysema (폐기종환자에서 고해상도 CT와 폐기능검사와의 상관관계)

  • Ahn, Joong-Hyun;Park, Jeong-Mee;Ko, Seung-Hyeon;Yoon, Jong-Goo;Kwon, Soon-Seug;Kim, Young-Kyoon;Kim, Kwan-Hyoung;Moon, Hwa-Sik;Park, Sung-Hak;Song, Jeong-Sup
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.3
    • /
    • pp.367-376
    • /
    • 1996
  • Background: The diagnosis of emphysema during life is based on a combination of clinical, functional, and radiographic findings, but this combination is relatively insensitive and nonspecific. The development of rapid, high-resolution third- and fourth-generation CT scanners has enabled us to resolve pulmonary parenchymal abnormalities with great precision. We compared chest HRCT findings with pulmonary function tests and arterial blood gas analysis in pulmonary emphysema patients to test the ability of HRCT to quantify the degree of pulmonary emphysema. Methods: From October 1994 to October 1995, the study group consisted of 20 subjects in whom HRCT of the thorax and pulmonary function studies had been obtained at St. Mary's Hospital. The analysis was based on scans at preselected anatomic levels and incorporated both lungs. On each HRCT slice the lung parenchyma was assessed for two aspects of emphysema: severity and extent. The five levels were graded and scored separately for the left and right lungs, giving a total of 10 lung fields, and a combination of severity and extent gave the degree of emphysema. We compared the HRCT quantitation of emphysema with pulmonary function tests, ABGA, CBC, and patient characteristics (age, sex, height, weight, smoking amount, etc.) in the 20 patients. Results: 1) There was a significant inverse correlation between HRCT scores for emphysema and the percentage predicted values of DLco (r = -0.68, p < 0.05), DLco/VA (r = -0.49, p < 0.05), FEV1 (r = -0.53, p < 0.05), and FVC (r = -0.47, p < 0.05). 2) There was a significant correlation between the HRCT scores and the percentage predicted values of TLC (r = 0.50, p < 0.05) and RV (r = 0.64, p < 0.05). 3) There was a significant inverse correlation between the HRCT scores and PaO2 (r = -0.48, p < 0.05) and a significant correlation with D(A-a)O2 (r = -0.48, p < 0.05), but no significant correlation between the HRCT scores and PaCO2. 4) There was no significant correlation between the HRCT scores and age, sex, height, weight, smoking amount, hemoglobin, hematocrit, or WBC counts. Conclusion: High-resolution CT provides a useful method for early detection and quantification of emphysema during life and correlates significantly with pulmonary function tests and arterial blood gas analysis.
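
A minimal sketch of the correlation analysis reported above, using invented HRCT scores and percent-predicted DLco values (illustrative only, not the study's data):

```python
from scipy.stats import pearsonr

hrct_score = [12, 25, 31, 40, 48, 55, 62, 70]   # hypothetical emphysema scores
dlco_pct   = [88, 75, 70, 61, 55, 47, 40, 33]   # hypothetical %-predicted DLco

r, p = pearsonr(hrct_score, dlco_pct)
print(f"r = {r:.2f}, p = {p:.4f}")   # a strong inverse correlation is expected
```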


A Study on Industries's Leading at the Stock Market in Korea - Gradual Diffusion of Information and Cross-Asset Return Predictability- (산업의 주식시장 선행성에 관한 실증분석 - 자산간 수익률 예측 가능성 -)

  • Kim Jong-Kwon
    • Proceedings of the Safety Management and Science Conference
    • /
    • 2004.11a
    • /
    • pp.355-380
    • /
    • 2004
  • I test the hypothesis that the gradual diffusion of information across asset markets leads to cross-asset return predictability in Korea. Using thirty-six industry portfolios and the broad market index as the test assets, I establish several key results. First, a number of industries, such as semiconductors, electronics, metals, and petroleum, lead the stock market by up to one month. In contrast, the market, which is widely followed, leads only a few industries. Importantly, an industry's ability to lead the market is correlated with its propensity to forecast various indicators of economic activity, such as industrial production growth. Consistent with the hypothesis, these findings indicate that the market reacts with a delay to information in industry returns about its fundamentals, because information diffuses only gradually across asset markets. Traditional theories of asset pricing assume that investors have unlimited information-processing capacity. However, this assumption does not hold for many traders, even the most sophisticated ones. Many economists recognize that investors are better characterized as only boundedly rational (see Shiller(2000), Sims(2001)). Even from casual observation, few traders can pay attention to all sources of information, much less understand their impact on the prices of the assets that they trade. Indeed, a large literature in psychology documents the extent to which even attention is a precious cognitive resource (see, e.g., Kahneman(1973), Nisbett and Ross(1980), Fiske and Taylor(1991)). A number of papers have explored the implications of limited information-processing capacity for asset prices; I review this literature in Section II. For instance, Merton(1987) develops a static model of multiple stocks in which investors have information about only a limited number of stocks and trade only those they have information about. Related models of limited market participation include Brennan(1975) and Allen and Gale(1994). As a result, stocks that are less recognized by investors have a smaller investor base (neglected stocks) and trade at a greater discount because of limited risk sharing. More recently, Hong and Stein(1999) develop a dynamic model of a single asset in which information gradually diffuses across the investment public and investors are unable to perform the rational-expectations trick of extracting information from prices. My hypothesis is that the gradual diffusion of information across asset markets leads to cross-asset return predictability. This hypothesis relies on two key assumptions. The first is that valuable information that originates in one asset reaches investors in other markets only with a lag, i.e., news travels slowly across markets. The second is that, because of limited information-processing capacity, many (though not necessarily all) investors may not pay attention to, or be able to extract information from, the asset prices of markets that they do not participate in. Taken together, these two assumptions lead to cross-asset return predictability. The hypothesis appears very plausible for a few reasons. To begin with, as pointed out by Merton(1987) and the subsequent literature on segmented markets and limited market participation, few investors trade all assets. Put another way, limited participation is a pervasive feature of financial markets. Indeed, even among equity money managers, there is specialization along industries, such as sector or market-timing funds.
Some reasons for this limited market participation include tax, regulatory, or liquidity constraints. More plausibly, investors have to specialize because they have their hands full trying to understand the markets in which they do participate.
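
The lead-lag test described above reduces to a predictive regression of next-month market returns on current industry returns, r_mkt(t+1) = a + b·r_ind(t) + e(t+1), where a significant slope b indicates that the industry leads the market. A minimal sketch on synthetic monthly data (the numbers are assumptions, not the paper's Korean dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 240                                         # 20 years of monthly returns
r_ind = rng.normal(0.01, 0.05, T)               # industry portfolio returns
noise = rng.normal(0.0, 0.04, T - 1)
r_mkt_next = 0.002 + 0.3 * r_ind[:-1] + noise   # market reacts with a lag

# OLS of r_mkt(t+1) on a constant and r_ind(t).
X = np.column_stack([np.ones(T - 1), r_ind[:-1]])
beta, *_ = np.linalg.lstsq(X, r_mkt_next, rcond=None)
print(f"a = {beta[0]:.4f}, b = {beta[1]:.3f}")  # b should recover ~0.3
```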


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e., their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e., by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can occur as well: it is possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e., points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common to fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the number of bits necessary for the given specifications is 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits for a value of the membership function m, and dm(fm) is the number of bits for the index of the corresponding membership function. In our case, Length = 3 × (5 + 3) = 24, so the memory dimension is 128×24 bits. Had we chosen to memorize all values of the membership functions, we would have needed to memorize the membership value of each of the 8 fuzzy sets on each memory row, a word dimension of 8×5 bits.
The dimension of the memory would therefore have been 128×40 bits. Consistent with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets. Focusing on elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions; at any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
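
A small software model of the word layout derived above: at most nfm = 3 (index, value) pairs are packed into one 24-bit row, instead of the 8×5 = 40 bits needed to store every membership value. This Python sketch illustrates the encoding only; it is not the authors' hardware design.

```python
NFM = 3        # max non-null memberships per universe element
IDX_BITS = 3   # 8 membership functions   -> 3-bit index
VAL_BITS = 5   # 32 discretization levels -> 5-bit value

def pack_word(entries):
    """Pack up to NFM (function_index, value) pairs into one memory word."""
    assert len(entries) <= NFM
    word = 0
    for fn_idx, value in entries:
        word = (word << (IDX_BITS + VAL_BITS)) | (fn_idx << VAL_BITS) | value
    # Left-align unused slots as zeros (null membership).
    word <<= (NFM - len(entries)) * (IDX_BITS + VAL_BITS)
    return word

# An element of U with membership level 12 in fuzzy set 2 and 30 in set 3:
row = pack_word([(2, 12), (3, 30)])
print(f"{row:024b}")   # one 24-bit memory row; 128 such rows = 128x24 bits
```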


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, the increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis are continuously increasing. This means that big data analysis will become even more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading. As a result, big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with particular attention focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document set; thus, it is essential to analyze the entire set at once to identify the topic of each document. This makes the analysis process take a long time when topic modeling is applied to a large number of documents. In addition, it suffers from a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents are divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first combining all the documents to be analyzed. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document set is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established. That is, assuming that the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other topic modeling studies. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by checking whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we also confirmed that the proposed methodology can provide results similar to topic modeling of the entire set, and we proposed a reasonable method for comparing the results of both methods.
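
A minimal sketch of the divide-and-conquer idea proposed above, using scikit-learn's LDA in place of the paper's exact pipeline; the corpus, the delegate-selection rule, and all parameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["stock market finance trading", "market finance investment",
        "soccer league match goal", "league season match player",
        "dna gene protein cell", "gene cell biology protein"]

def fit_lda(texts, n_topics):
    X = CountVectorizer().fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    return lda.fit_transform(X)          # document-topic distributions

# 1) Split the global set into local sets; model each independently.
local_sets = [docs[:3], docs[3:]]
delegates = []
for local in local_sets:
    theta = fit_lda(local, n_topics=2)
    # Delegate: the document most representative of each local topic.
    for t in range(theta.shape[1]):
        delegates.append(local[int(np.argmax(theta[:, t]))])

# 2) Model the reduced global set (RGS) of delegates; local topics are then
#    mapped onto RGS topics through their delegate documents.
theta_rgs = fit_lda(delegates, n_topics=3)
print(theta_rgs.round(2))
```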