• Title/Summary/Keyword: Language functions


Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first is to use application-specific integrated circuit (ASIC) technology, in which the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture; here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip was the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip in a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a rule-based fuzzy inference program from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve a speed-up in inference as high as 2.5 times if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a small, fixed set of programs, so modifying a microprocessor in this way to create an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes; an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

Table I. Inference time by 51 rules

                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences    125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 µs
  FLIPS              48                     122                         156,250
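
To make the inference cycle concrete, here is a minimal Python sketch of max-min (Mamdani) inference with table-lookup fuzzification and centroid defuzzification, mirroring the 64-element fuzzy-set arrays and the simpler two-input rule format described above. The rule base and membership functions are invented for illustration, not taken from the chips.

```python
# Minimal sketch of the max-min (Mamdani) inference cycle described above.
# Fuzzy sets are arrays of 64 membership values, as on the UNC/MCNC chip;
# the rules, membership functions, and I/O values are illustrative only.
import numpy as np

N = 64  # resolution of each fuzzy set, matching the chip's 64-element arrays

def tri(center, width):
    """Triangular membership function sampled at N points (table lookup)."""
    x = np.arange(N)
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)

# Two rules of the simpler format: IF A and B THEN Do E
rules = [
    {"A": tri(16, 10), "B": tri(20, 12), "E": tri(12, 8)},
    {"A": tri(40, 10), "B": tri(44, 12), "E": tri(50, 8)},
]

def infer(a_in, b_in):
    """One inference: fuzzify by table lookup, min for AND, max to combine
    rules, then defuzzify by the centroid method."""
    combined = np.zeros(N)
    for r in rules:
        # Rule strength: min of the antecedent memberships (fuzzy AND)
        w = min(r["A"][a_in], r["B"][b_in])
        # Mamdani implication: clip the consequent at the rule strength,
        # then aggregate across rules with max (fuzzy union)
        combined = np.maximum(combined, np.minimum(w, r["E"]))
    total = combined.sum()
    return float((np.arange(N) * combined).sum() / total) if total else 0.0

print(infer(a_in=18, b_in=22))  # crisp output near the first rule's consequent
```

Note how min and max dominate the inner loop, which is why dedicated min/max instructions pay off so directly in the RISC measurements above.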


Performance of Investment Strategy using Investor-specific Transaction Information and Machine Learning (투자자별 거래정보와 머신러닝을 활용한 투자전략의 성과)

  • Kim, Kyung Mock;Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.65-82
    • /
    • 2021
  • Stock market investors are generally split into foreign investors, institutional investors, and individual investors. Compared to individual investors, professional investor groups such as foreign investors have an advantage in information and financial power, and as a result foreign investors are known to show good investment performance among market participants. The purpose of this study is to propose an investment strategy that combines investor-specific transaction information and machine learning, and to analyze the portfolio investment performance of the proposed model using actual stock prices and investor-specific transaction data. The Korea Exchange offers securities firms daily information on the purchase and sale volume of each investor type. We developed a data collection program in the C# programming language using an API provided by Daishin Securities Cybosplus, and collected daily opening prices, closing prices, and investor-specific net purchase data for 151 out of 200 KOSPI stocks from January 2, 2007 to July 31, 2017. The self-organizing map is an artificial neural network that performs clustering by unsupervised learning; it was introduced by Teuvo Kohonen in 1984. It implements competition among the neurons within the map layer, and all connections are feedforward (non-recurrent), running from the input layer upward. It can also be expanded to multiple layers, although a single layer is commonly used. Linear functions are used as the activation functions of the artificial neurons, and the learning rule is the instar rule, as in general competitive learning. The backpropagation model, at its core, is an artificial neural network that performs classification by supervised learning. We grouped and transformed the investor-specific transaction volume data through the self-organizing map and used the results to train backpropagation models. Based on the models' predictions for the verification data, the portfolios were rebalanced monthly. For performance analysis, a passive portfolio was designated, and the KOSPI 200 and KOSPI index returns were obtained as proxies for market returns. Performance analysis was conducted using the equally-weighted portfolio return, compound interest rate, annual return, maximum drawdown (MDD), standard deviation, and Sharpe ratio. The buy-and-hold return of the top 10 stocks by market capitalization is designated as the benchmark, buy-and-hold being the best strategy under the efficient market hypothesis. The prediction rate of the backpropagation model on the learning data was significantly high at 96.61%, and the prediction rate on the verification data was also relatively high at 57.1%. The performance of the self-organizing map grouping can be judged from the backpropagation results: if the grouping had been poor, the learning results of the backpropagation model would have been poor as well. In this respect, the machine learning model is judged to have learned better than in previous studies. Our portfolio doubled the return of the benchmark and outperformed the market returns of the KOSPI and KOSPI 200 indexes. The MDD and standard deviation, as portfolio risk indicators, also showed better results than the benchmark, and the Sharpe ratio was higher than those of the benchmark and the stock market indexes.
Through this, we presented a direction for portfolio composition programs using machine learning and investor-specific transaction information, and showed that the approach can be used to develop programs for real stock investment. The reported return is the result of composing the portfolio monthly and rebalancing assets to equal proportions. Better outcomes are expected if stocks that remain in the suggested list are held continuously rather than sold and re-bought at each monthly rebalancing. The strategy therefore appears applicable to real transactions.
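
As a rough sketch of the pipeline described above, the following Python code clusters stand-in investor-specific features with a small self-organizing map (competitive learning with an instar-style update) and then trains a one-layer network by gradient descent on the one-hot cluster codes, a simplified stand-in for the paper's backpropagation model. The map size, learning schedule, and random data are assumptions, not the paper's configuration.

```python
# Sketch: SOM grouping of investor-specific features, then a supervised stage.
# All data and hyperparameters below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # stand-in for (foreign, institutional, individual) net buys
y = (X[:, 0] > 0).astype(int)   # stand-in label, e.g., next-month outperformance

# --- Self-organizing map: 4x4 grid, feedforward weights from input to map ---
grid = 4
W = rng.normal(size=(grid * grid, 3))
for t in range(2000):
    x = X[rng.integers(len(X))]
    winner = np.argmin(((W - x) ** 2).sum(axis=1))   # competition among map neurons
    lr = 0.5 * (1 - t / 2000)
    W[winner] += lr * (x - W[winner])                # instar-style update toward input

clusters = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])

# --- Supervised stage: one-hot cluster codes -> logistic unit trained by
# gradient descent (a one-layer stand-in for the paper's backprop network) ---
H = np.eye(grid * grid)[clusters]          # one-hot encoding of SOM groups
w, b = np.zeros(grid * grid), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(H @ w + b)))     # sigmoid activation
    g = p - y                              # gradient of cross-entropy loss
    w -= 0.1 * H.T @ g / len(y)
    b -= 0.1 * g.mean()

p = 1 / (1 + np.exp(-(H @ w + b)))
print("in-sample hit ratio:", ((p > 0.5) == y).mean())
```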

Smart farm development strategy suitable for domestic situation -Focusing on ICT technical characteristics for the development of Industry 6.0- (국내 실정에 적합한 스마트팜 개발 전략 -6차산업의 발전을 위한 ICT 기술적 특성을 중심으로-)

  • Han, Sang-Ho;Joo, Hyung-Kun
    • Journal of Digital Convergence
    • /
    • v.20 no.4
    • /
    • pp.147-157
    • /
    • 2022
  • This study proposes a smart farm technology strategy suited to domestic conditions, focusing on differentiating ICT technology for the Korean situation. In the case of countries with advanced agricultural industries, it was confirmed that each focused on developing the specific stage that reflected its geographical characteristics, the characteristics of its agricultural industry, and the demands of its people, rather than pursuing indiscriminate development across all stages. Therefore, in response to problems such as the rapid decrease and aging of the domestic rural population, the loss of agricultural price competitiveness, the increase in fallow land, and the decreasing utilization rate of arable land, this study suggests that future smart farm ICT development should aim at producing quality agricultural products with price competitiveness, and should be promoted with attention to performance, ease of use for an aging labor force, and economic feasibility at a small business scale. First, in terms of economic feasibility, ICT should be configured by selecting only the functions necessary for the small farm household's (primary) business environment, and a smooth communication channel with these farms should be built into the technology so that the functions actually required by farm households can be updated gradually, which may contribute to cost reduction. Second, in terms of performance, operational accuracy can be increased by improving the communication aspects of ICT, such as adjusting the difficulty of big data interfaces for Korea's aging farm population, using language suited to them, and setting algorithms that reflect their prediction tendencies. Third, in terms of ease of use, smart farms based on ICT for the development of Industry 6.0 (1.0 (agriculture, forestry) + 2.0 (agricultural and fishery product processing) + 3.0 (services, rural experience, SCM)) perform operations according to specific commands, so ease of use can be promoted by presetting and standardizing devices based on big data configurations customized for each regional environment.

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflow of content is becoming ever more important as more and more content is generated. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are likewise focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas like the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, producing labeled text data by hand becomes more difficult as the extent and scope of the knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be applied directly. Second, it makes performance evaluation possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, analysts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained.
Thus, when a new entity from the testing set appears, we can calculate its score with every score function, and the stock whose function gives the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power and determine whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance per stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain: most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
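
The per-stock scoring step can be illustrated with a small sketch of the standard neural tensor network score function, u^T tanh(e1^T W e2 + V[e1; e2] + b), applied to one-hot entity vectors as in the paper. The slice count k, the pairing of a candidate entity with a stock-anchor entity, and the random parameters are assumptions for illustration, not the paper's trained model.

```python
# Minimal sketch of NTN-style scoring: one score function per stock,
# applied to one-hot entity vectors. Parameters are random stand-ins
# for trained ones; k (tensor slices) is an assumed hyperparameter.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_stocks = 100, 4, 30   # 100 entities per stock (one-hot), 4 slices, 30 stocks

def make_params():
    return {
        "W": rng.normal(size=(k, d, d)) * 0.1,   # bilinear tensor slices
        "V": rng.normal(size=(k, 2 * d)) * 0.1,  # linear term over [e1; e2]
        "b": np.zeros(k),
        "u": rng.normal(size=k),
    }

params = [make_params() for _ in range(n_stocks)]

def score(p, e1, e2):
    """Standard NTN score: u^T tanh(e1^T W e2 + V [e1; e2] + b)."""
    bilinear = np.einsum("i,kij,j->k", e1, p["W"], e2)
    linear = p["V"] @ np.concatenate([e1, e2])
    return p["u"] @ np.tanh(bilinear + linear + p["b"])

# A new entity from a report (one-hot) is scored against every stock's
# function; the highest-scoring stock is predicted as the related item.
entity = np.eye(d)[7]
anchor = np.eye(d)[0]   # hypothetical stock-name entity it is paired with
predicted = int(np.argmax([score(p, entity, anchor) for p in params]))
print("predicted stock index:", predicted)
```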

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs consisting of nodes and edges. Partial derivatives can be obtained on each edge of a computational graph; with these, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. The criterion is simply based on the length of the code; the learning curve and ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives coding flexibility. With low-level coding, as in Theano, we can implement and test any new deep learning model or any new search method we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept identical: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times more slowly than with a GPU. We concluded, though, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN.
If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. And for someone learning deep learning, the availability of sufficient examples and references matters as well.
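
The automatic differentiation that the abstract highlights can be illustrated with a toy reverse-mode sketch: each operation records local partial derivatives on the edges of the computational graph, and backward() accumulates the chain rule from the output down to any variable. This is a pedagogical sketch, not any of the three frameworks' actual implementation.

```python
# Toy reverse-mode automatic differentiation over a computational graph.
# Each operation records its inputs and local partial derivatives (the edges);
# backward() applies the chain rule from the output node down to the leaves.
class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # list of (parent_node, local_derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)   # chain rule along each edge

x, y = Node(2.0), Node(3.0)
z = x * y + x          # z = xy + x, so dz/dx = y + 1 = 4, dz/dy = x = 2
z.backward()
print(x.grad, y.grad)  # 4.0 2.0
```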

A Study on Methods for the Visualization of Stage Space through Stage Lighting (무대조명을 통한 무용 예술의 무대공간 시각화 방안 연구)

  • Lee, Jang-Weon;Yi, Chin-Woo
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.23 no.4
    • /
    • pp.16-28
    • /
    • 2009
  • Stage art is fundamentally built on the essence of "seeing," and at the same time possesses a relativity between showing and seeing. Stage lighting uses artificial light to address this essence of "seeing," and in the modern age its role has been elevated to an important medium of visual expression in stage art, thanks to lighting tools that developed rapidly alongside the discovery of electricity and advances in optics. Lighting thus uses the medium of light both to give mental and emotional inspiration to the audience and to express time and space aesthetically. In other words, stage lighting is a complex function of lighting engineering (technology and science) and aesthetic sense (feeling and art). This study researches methods for visualizing stage space through lighting, focusing mainly on dance. I have studied the basics of stage lighting, its relations with other fields of stage art, and the functions and characteristics of lighting. The results show that lighting can be used to maximize the visualization of dance, underscoring lighting's artistic growth and capacity for aesthetic expression, and I reached the following conclusions. First, lighting uses the forms and directions of light that various instruments can produce to visualize the space on stage, and can maximally express the image the work seeks. Second, through the movement of light, lighting can serve as a visual representation of the configuration of space in dance works. Third, through the visual and spatial expression created by light, the work's dramatic catharsis can draw mental and emotional responses from the audience. Fourth, lighting can be seen not as a supporting role but as an original visual design. In conclusion, for lighting to be freed from the simple function of "lighting up the stage," which most people take for granted, and to grow as an art form in its own right, lighting designers must understand the intentions of the choreographer and the work with creativity and artistry; they must treat light and color as an aesthetic language that heightens the effects of the work and participates as an element of its creation, so that lighting comes to be treated as a form of art.

Factors Influencing on the Cognitive Function in Type 2 Diabetics (2형 당뇨병 환자의 인지 기능에 영향 미치는 인자)

  • Goh, Dong Hwan;Cheon, Jin Sook;Choi, Young Sik;Kim, Ho Chan;Oh, Byoung Hoon
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.26 no.1
    • /
    • pp.59-67
    • /
    • 2018
  • Objectives : The aims of this study were to determine the frequency and nature of cognitive dysfunction in type 2 diabetics and to reveal the variables influencing it. Methods : From eighty type 2 diabetics (42 males and 38 females), demographic and clinical data were obtained by structured interviews. Cognitive function was measured using the Korean Version of the Mini-Mental State Examination (MMSE-K) and the Korean Version of the Montreal Cognitive Assessment (MoCA-K). Severity of depression was evaluated by the Korean Version of the Hamilton Depression Rating Scale (K-HDRS). Results : 1) Among the eighty type 2 diabetics, 13.75% scored below 24 on the MMSE-K, while 38.8% scored below 22 on the MoCA-K. 2) The total scores and subtest scores of the MoCA-K, including visuospatial/executive, attention, language, delayed recall, and orientation, were significantly lower in type 2 diabetics with cognitive dysfunction (N=31) than in those without cognitive dysfunction (N=49) (p<0.001, respectively). 3) There were significant differences between type 2 diabetics with and without cognitive dysfunction in age, education, economic status, body mass index, duration of diabetes, and total scores of the K-HDRS, MMSE-K, and MoCA-K (p<0.05, respectively). 4) The total score of the MoCA-K correlated significantly with age, education, body mass index, family history of diabetes, duration of diabetes, and total score of the K-HDRS (p<0.05, respectively). 5) The risk of cognitive dysfunction in type 2 diabetics was significantly influenced by sex, education, fasting plasma glucose, and depression. Conclusions : Cognitive dysfunction in type 2 diabetics appears to be related to multiple factors; therefore, more comprehensive biopsychosocial approaches are needed for the diagnosis and management of type 2 diabetes.

Existent, but Non-existent Spaces for Others: Focusing on Discourse-spaces of a Korean Movie, The Bacchus Lady (2016) (존재하지만 존재 않는 타자들의 공간 영화 <죽여주는 여자>의 담론 공간을 중심으로)

  • Jang, Eun Mi;Han, Hee Jeong
    • Korean journal of communication and information
    • /
    • v.84
    • /
    • pp.99-123
    • /
    • 2017
  • We analyzed the movie The Bacchus Lady (2016, directed by J-yong E), which is entangled in the politics of gender, age, class, and sexuality, naming its settings "spaces of Others" and using Foucault's concept of heterotopia. Foucault addressed three types of spaces: the realistic space where we currently live; the unrealistic, non-existent utopia; and heterotopia, which functions antithetically to reality. Foucault's heterotopia can thus be understood as "heterogeneous spaces" within reality. The Bacchus Lady revolves around So-Young, a 65-year-old prostitute who sells her body to old men in the parks of downtown Seoul. Elderly street prostitutes are often called "Bacchus ladies" because they offer a bottle of the popular energy drink Bacchus while soliciting. The movie also represents minorities such as Tina, a transgender singer and madam of the club G-spot; migrant women like Camila and Aindu; and Dohoon, an amputee. Through these people's bodies, problems of empire, nation, ethnicity, gender, age, and class are entangled in the movie. The politics of these positions construct heterotopias in five spaces of Others. First, the spaces where ageing and death intersect. Second, the spaces of So-Young's prostitution. Third, the spaces of So-Young's mothering: she gave her baby up for adoption to an American family when he was an infant and has felt guilty ever since. Fourth, the spaces of So-Young's quasi-family with Minho, a Kopino boy abandoned by his Korean father, Dohoon, the poor amputee, and Tina, the transgender singer. Fifth, the spaces of So-Young's speech as the subaltern: the subaltern does not have the language to express her own experience, and to listen to the words of the subaltern we must undertake the task of measuring the silence; the film's representation of So-Young as subaltern makes her speak about her situation. Finally, the spaces constructed by the movie can be connected to the "heterotopia of crisis," the "heterotopia of deviation," and the "heterotopia of fantasy." The spaces of the movie represent the lives of Others; nevertheless, So-Young's Otherness, rendered through these heterotopic spaces, is transformed into an absolute Other by the patriarchal traits of the cinematic narrative.


Correlation between the Seoul Neuropsychological Screening Battery of the Parkinson's Disease Patient with Mild Cognitive Impairment and Change of the Cerebral Ventricle Volume in the Brain MRI (경도인지장애를 동반한 파킨슨병 환자의 서울신경심리검사와 뇌 자기공명영상에서 뇌실 체적 변화에 대한 상관관계)

  • Lee, Hyunyong;Kim, Hyeonjin;Im, Inchul;Lee, Jaeseung
    • Journal of the Korean Society of Radiology
    • /
    • v.8 no.5
    • /
    • pp.231-240
    • /
    • 2014
  • The purpose of this study was to analyze the Seoul Neuropsychological Screening Battery (SNSB), used to evaluate the cognitive status of Parkinson's disease patients with mild cognitive impairment (PD-MCI), together with changes in cerebral ventricle volume on brain magnetic resonance imaging (MRI), and to put forward a guideline for determining diagnostic criteria for PD-MCI. To this end, patients diagnosed with Parkinson's disease (PD-MCI group: 34 patients; Parkinson's disease with normal cognition, PD-NC group: 34 patients) performed the SNSB test covering attention, language, memory, visuospatial, and frontal/executive functions, and underwent brain MRI. Additionally, to compare changes in cerebral ventricle volume, we performed brain MRI on a normal control (NC) group of 32 subjects. The volumetric analysis of specific cerebral ventricles was performed using FreeSurfer ver. 5.1 (Massachusetts General Hospital, Boston, MA, USA). As a result, compared with the PD-NC group, the PD-MCI group showed statistically significant reductions in memory and visuospatial performance (p<0.05). The volumetric changes of specific cerebral ventricles showed statistically significant variation in the left and right lateral ventricles, the left and right inferior lateral ventricles, and the third ventricle. For objective comparison, volumes normalized as percentages showed greater ventricular enlargement in the PD-MCI group than in the PD-NC group. In particular, the enlargement of the left and right lateral ventricles in PD-MCI patients showed a clear quantitative linear relationship with the memory and visuospatial scores of the SNSB (r>0.5, p<0.05). Therefore, we could assess diagnostic criteria for PD-MCI by observing the volumetric variation of specific cerebral ventricles with FreeSurfer on brain MRI and analyzing its correlation with the SNSB.
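
The reported linear relationship is a Pearson correlation between normalized ventricle volume and SNSB subtest scores; a minimal sketch of that computation follows. The arrays are placeholder values invented for illustration, not the study's data, and the percent-of-intracranial-volume normalization is an assumption.

```python
# Sketch of the correlation analysis: normalized ventricle volumes vs. SNSB
# memory scores. The arrays below are illustrative placeholders, not study data.
import numpy as np
from scipy.stats import pearsonr

ventricle_pct = np.array([1.2, 1.5, 1.1, 1.9, 2.3, 1.7])  # volume as % of ICV (assumed normalization)
snsb_memory = np.array([68, 61, 70, 52, 45, 55])           # hypothetical SNSB memory scores

r, p = pearsonr(ventricle_pct, snsb_memory)
print(f"r = {r:.2f}, p = {p:.3f}")  # the study reports |r| > 0.5 with p < 0.05
```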

The Construction of QoS Integration Platform for Real-time Negotiation and Adaptation Stream Service in Distributed Object Computing Environments (분산 객체 컴퓨팅 환경에서 실시간 협약 및 적응 스트림 서비스를 위한 QoS 통합 플랫폼의 구축)

  • Jun, Byung-Taek;Kim, Myung-Hee;Joo, Su-Chong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.11S
    • /
    • pp.3651-3667
    • /
    • 2000
  • Recently, in Internet-based distributed multimedia environments, most researchers have focused on two rapidly growing technologies: streaming technology and distributed object technology. In particular, studies attempting to integrate streaming services on top of distributed object technology have been progressing, and these technologies are applied to various stream service management schemes and protocols. However, the stream service management models proposed by existing research are insufficient for supporting the QoS of stream services. Moreover, the existing models cannot support extensibility and reusability when QoS-related functions are developed as sub-modules tied to specific-purpose application services. To solve these problems, this paper proposes a QoS integration platform that can be extended and reused using distributed object technologies and that guarantees the QoS of stream services. The suggested platform structure consists of three components: User Control Module (UCM), QoS Management Module (QoSM), and Stream Object. A Stream Object has send/receive operations for transmitting RTP packets over TCP/IP. The User Control Module (UCM) controls Stream Objects via CORBA service objects. The QoS Management Module (QoSM) maintains the QoS of the stream service between the UCMs on client and server. As QoS control methodologies, the procedures of resource monitoring, negotiation, and resource adaptation are executed via interactions among the components mentioned above. To construct this QoS integration platform, we first implemented the modules above independently and then used IDL to define the interfaces among them, supporting platform independence, interoperability, and portability based on CORBA. The platform is built using OrbixWeb 3.1c, following the CORBA specification, on Solaris 2.5/2.7, with the Java language, Java Media Framework API 2.0, Mini-SQL 1.0.16, and multimedia equipment. To verify the platform functionally, we show the execution results of each module and numerical data obtained from the QoS control procedures on the client and server GUIs while a stream service is running.
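
A minimal sketch of the monitor, negotiate, and adapt cycle that the QoSM runs between client and server might look like the following. The module name mirrors the abstract, but the frame-rate contract, thresholds, and degrade policy are invented for illustration; the real platform performs these interactions through CORBA interfaces.

```python
# Sketch of the QoS control cycle (resource monitoring -> negotiation ->
# resource adaptation) between client and server. Thresholds and the degrade
# policy are illustrative; the real platform does this via CORBA objects.
from dataclasses import dataclass

@dataclass
class QoSContract:
    frame_rate: int      # frames per second agreed in negotiation
    min_frame_rate: int  # floor below which renegotiation is required

class QoSManagementModule:
    def __init__(self, contract: QoSContract):
        self.contract = contract

    def monitor(self, measured_rate: float) -> bool:
        """Resource monitoring: does the stream still meet the contract?"""
        return measured_rate >= self.contract.frame_rate

    def adapt(self, measured_rate: float):
        """Resource adaptation: degrade gracefully toward the agreed floor."""
        target = max(int(measured_rate), self.contract.min_frame_rate)
        print(f"adapting stream: {self.contract.frame_rate} -> {target} fps")
        self.contract.frame_rate = target

qosm = QoSManagementModule(QoSContract(frame_rate=30, min_frame_rate=10))
for measured in (31.0, 24.0, 12.5):   # rates reported by the monitor
    if not qosm.monitor(measured):
        qosm.adapt(measured)          # renegotiation would happen here
```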
