• Title/Summary/Keyword: new memory


Added Value of Chemical Exchange-Dependent Saturation Transfer MRI for the Diagnosis of Dementia

  • Jang-Hoon Oh;Bo Guem Choi;Hak Young Rhee;Jin San Lee;Kyung Mi Lee;Soonchan Park;Ah Rang Cho;Chang-Woo Ryu;Key Chung Park;Eui Jong Kim;Geon-Ho Jahng
    • Korean Journal of Radiology
    • /
    • v.22 no.5
    • /
    • pp.770-781
    • /
    • 2021
  • Objective: Chemical exchange-dependent saturation transfer (CEST) MRI is sensitive for detecting solid-like proteins and may detect changes in the levels of mobile proteins and peptides in tissues. The objective of this study was to evaluate the characteristics of chemical exchange proton pools using the CEST MRI technique in patients with dementia. Materials and Methods: Our institutional review board approved this cross-sectional prospective study and informed consent was obtained from all participants. This study included 41 subjects (19 with dementia and 22 without dementia). Complete CEST data of the brain were obtained using a three-dimensional gradient and spin-echo sequence to map CEST indices, such as amide, amine, hydroxyl, and magnetization transfer ratio asymmetry (MTRasym) values, using six-pool Lorentzian fitting. Statistical analyses of CEST indices were performed to evaluate group comparisons, their correlations with gray matter volume (GMV) and Mini-Mental State Examination (MMSE) scores, and receiver operating characteristic (ROC) curves. Results: Amine signals (0.029 for non-dementia, 0.046 for dementia, p = 0.011 at the hippocampus) and MTRasym values at 3 ppm (0.748 for non-dementia, 1.138 for dementia, p = 0.022 at the hippocampus) and 3.5 ppm (0.463 for non-dementia, 0.875 for dementia, p = 0.029 at the hippocampus) were significantly higher in the dementia group than in the non-dementia group. Most CEST indices were not significantly correlated with GMV; however, except for amide, most indices were significantly correlated with the MMSE scores. The classification power of most CEST indices was lower than that of GMV, but adding one of the CEST indices to GMV improved the classification between the subject groups. The largest improvement was seen in the MTRasym values at 2 ppm in the anterior cingulate (area under the ROC curve = 0.981), with a sensitivity of 100% and a specificity of 90.91%. Conclusion: CEST MRI potentially allows noninvasive imaging of alterations in the Alzheimer's disease brain without injecting isotopes, enabling monitoring of different disease states, and may provide a new imaging biomarker in the future.
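For reference, the MTRasym index mentioned above is conventionally defined as the normalized difference between the saturated signals acquired at symmetric frequency offsets around water; a standard formulation from the CEST literature (not necessarily the exact expression used in this paper) is

$$\mathrm{MTR_{asym}}(\Delta\omega) = \frac{S(-\Delta\omega) - S(+\Delta\omega)}{S_0},$$

where $S(\pm\Delta\omega)$ is the signal measured with saturation applied at offset $\pm\Delta\omega$ (e.g., 2, 3, or 3.5 ppm from water) and $S_0$ is the unsaturated reference signal.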

Design of Poly-Fuse OTP IP Using Multibit Cells (Multibit 셀을 이용한 Poly-Fuse OTP IP 설계)

  • Dongseob Kim;Longhua Li;Panbong Ha;Younghee Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.4
    • /
    • pp.266-274
    • /
    • 2024
  • In this paper, we designed a low-area 32-bit PF (poly-fuse) OTP IP, a non-volatile memory that stores data required for analog circuit trimming and calibration. Since one OTP cell is constructed using two PFs on one select transistor, a 1cell-2bit multibit PF OTP cell that can program 2 bits of data is proposed. The bitcell size of the proposed 1cell-2bit PF OTP cell is half of 12.69 ㎛ × 3.48 ㎛ (= 44.161 ㎛²), reducing the cell area by 33% compared to that of the existing PF OTP cell. In addition, a new 1-row × 32-column cell array circuit and core circuits (WL driving circuit, BL driving circuit, BL switch circuit, and DL sense amplifier circuit) are proposed to support the operation of the proposed multibit cell. The layout size of the 32-bit OTP IP using the proposed multibit cell, 238.47 ㎛ × 156.52 ㎛ (= 0.0373 mm²), is reduced by about 33% compared to that of the existing 32-bit PF OTP IP using a single bitcell, which is 386.87 ㎛ × 144.87 ㎛ (= 0.056 mm²). The 32-bit PF OTP IP, designed with 10 years of data retention time in mind, has a minimum programmed PF sensing resistance of 10.5 ㏀ in the detection read mode and 5.3 ㏀ in the read mode, according to post-layout simulation of the test chip.
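For clarity, the quoted layout areas work out as follows (simple arithmetic on the figures above, with totals expressed in mm²): 238.47 ㎛ × 156.52 ㎛ ≈ 37,325 ㎛² ≈ 0.0373 mm² for the proposed IP, and 386.87 ㎛ × 144.87 ㎛ ≈ 56,046 ㎛² ≈ 0.056 mm² for the existing IP; the ratio 0.0373 / 0.056 ≈ 0.67 corresponds to the roughly 33% area reduction stated above.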

A STUDY ON THE TEMPERATURE CHANGES OF BONE TISSUES DURING IMPLANT SITE PREPARATION (임플랜트 식립부위 형성시 골조직의 온도변화에 관한 연구)

  • Kim Pyung-Il;Kim Yung-Soo;Jang Kyung-Soo;Kim Chang-Whe
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.40 no.1
    • /
    • pp.1-17
    • /
    • 2002
  • The purpose of this study is to examine the possibility of thermal injury to bone tissues during implant site preparation under the same conditions as typical clinical practice with the Brånemark implant system. All the burs for the Brånemark implant system were studied except the round bur. The experiments involved 880 drilling cases: 50 cases for each of the 5 steps of NP, 5 steps of RP, and 7 steps of WP, all including the screw tap, and 30 cases of the 2 mm twist drill. For precision drilling, a precision handpiece restraining system was developed (Eungyong Machinery Co., Korea). The system kept the drill parallel to the drilling path and allowed horizontal adjustment of the drill in increments as small as 1 ㎛. The thermocouple insertion hole, 0.9 mm in diameter and 8 mm in depth, was prepared 0.2 mm away from the tapping bur, the last drilling step. The temperatures due to the countersink, pilot drill, and other drills were measured at the surface of the bone and at depths of 4 mm and 8 mm, respectively. Countersink drilling temperature was measured by attaching the tip of a thermocouple to the rim of the countersink. To assure temperature measurement at the desired depths, 'bent thermocouples' with their tips bent at 120° at 4 and 8 mm were used. The profiles of temperature variation were recorded continuously at one-second intervals using a thermometer with memory function (Fluke Co., U.S.A.) and 0.7 mm thermocouples (Omega Co., U.S.A.). To simulate typical clinical conditions, 35 mm square samples of bovine scapular bone were utilized. The samples were approximately 20 mm thick, with the cortical thickness on the drilling side ranging from 1 to 2 mm. A sample was placed in a container of saline solution so that its lower half was submerged in the solution and the upper half exposed to the room air, which averaged 24.9°C. The temperature of the saline solution was maintained at 36.5°C using an electric heater (J. O Tech Co., Korea). This experimental condition was similar to that of a patient's open mouth. The study revealed that the 2 mm twist drill required the greatest attention. As a guide drill, the twist drill is required to bore through 'virgin bone,' rather than merely enlarging an already drilled hole as is the case with the other drills. This typically generates a greater amount of heat. Furthermore, one tends to apply greater pressure to overcome drilling difficulty, thus producing an even greater amount of heat. 150 experiments were conducted with the 2 mm twist drill. For 140 cases, a drill pressure of 750 g was sufficient, and 10 cases required an additional 500 or 1,000 g of drilling pressure. In the case of the former, 3 of the 140 cases produced temperatures greater than 47°C, the threshold temperature for degeneration of bone tissue (Eriksson et al., 1983), which is also the reference temperature in this study. In each of the 10 cases requiring extra pressure, the temperature exceeded the reference temperature. More significantly, a surge of heat was observed in each of these cases. These observations led to an additional 20 drilling experiments on dense bones. For 10 of these cases, a pressure of 1,250 g was applied; for the other 10, 1,750 g. In each of these cases, it was also observed that the temperature rose abruptly, far above the threshold temperature of 47°C, sometimes even to 70 or 80°C.
It was also observed that the increased drilling pressure influenced the shortening of drilling time more than the rise of drilling temperature. This suggests the desirability of clinically reconsidering the application of extra pressure to prevent possible injury to bone tissues. An analysis of the two extra-pressure groups of 1,250 g and 1,750 g revealed that the t-statistics for the reduction in drilling time due to extra pressure and for the corresponding increase in peak temperature were 10.80 and 2.08, respectively, suggesting that drilling time was more influenced than temperature. All the drillings subsequent to the drilling with the 2 mm twist drill did not produce excessive heat, i.e., the heat generation was at or below the body temperature level. Some of the screw tap, pilot, and countersink drillings showed negative correlation coefficients between the generated heat and the drilling time, indicating that the longer the drilling time, the lower the temperature. The study also revealed that the drilling time increased as a function of the frequency of use of the drill. Under a drilling pressure of 750 g, the drilling time for an old twist drill that had already drilled 40 times was 4.5 times longer than that of a new drill. The measurement was taken for the first 10 drillings of a new drill and 10 drillings of an old drill that had already been used for 40 drillings. The test statistic of the small-sample t-test was 3.49, confirming that used twist drills require a longer drilling time than new ones. On the other hand, there was no significant difference in drilling temperature between the new drill and the old twist drill. Finally, the following conclusions were reached from this study: 1. A used drilling bur causes almost no change in drilling temperature but an increase in drilling time through 50 drillings under the manufacturer-recommended cooling conditions and a drilling pressure of 750 g. 2. The heat generated through drilling mattered only in the case of the 2 mm twist drill, the first drill to be used in the bone drilling process; for all the other drills there is no significant problem. 3. If the drilling pressure is increased when a 2 mm twist drill reaches dense bone, the temperature rises abruptly even under the manufacturer-recommended cooling conditions. 4. Drilling heat was highest at the final moment of the drilling process.

The Internal Representations of <Frank Film> (1973) as Seen through Walter Benjamin's Dialectical Images (프랭크 무리스의 콜라주 애니메이션 <프랭크 필름>(1973)에 나타난 내적 표현 : 발터 벤야민의 변증법적 이미지를 중심으로)

  • Kim, Young-Ok;Moon, Jae-Cheol
    • Cartoon and Animation Studies
    • /
    • s.38
    • /
    • pp.53-70
    • /
    • 2015
  • In the industrialized societies of the 19th and 20th centuries, over-produced and mass-consumed images were constantly shown to people via mass media as a means to provoke desire. Frank Mouris, the American independent animator, captured the infinite nesting of the industrialized image together with his autobiographical story in his work <Frank Film> (1973) and turned it into an intense visual flow. This innovative art animation broke the traditional form of narrative animation and won the Annecy Animation Festival Grand Prix and an Academy Award in 1974. It was also selected in 1996 for preservation in the United States National Film Registry by the Library of Congress as being culturally, historically, or aesthetically significant. This study explores how the half-million images used to express Frank Mouris's autobiographical story in <Frank Film> can be analyzed through the concept of Walter Benjamin's 'dialectical images'. Typically, a 'dialectic' is formed by contradiction or opposition at the level of basic principles, but Benjamin's dialectical image can be formed without any opposing concept while maintaining the uniqueness of each new relationship with the past. Benjamin's dialectical images no longer stay in the historical past; they meet the present whenever someone realizes the past in the present moment. I suggest three aspects, following Benjamin's point of view, for analyzing this animated film: 'historical-dialectical imaging of private/collective memory', 'reconfiguring the present through analyzing the relationship between the image flows and their own time/space', and 'old future over the existing fragment and the presence of the fragment'. <Frank Film> has great value not only in presenting the experimental and innovative aesthetics of animated film, but also in offering an analysis of culture and society in the mid-20th century. This study thereby explores the diversity of animation representation and aesthetics, and suggests a new aspect of animation studies.

The Adoption and Diffusion of Semantic Web Technology Innovation: Qualitative Research Approach (시맨틱 웹 기술혁신의 채택과 확산: 질적연구접근법)

  • Joo, Jae-Hun
    • Asia pacific journal of information systems
    • /
    • v.19 no.1
    • /
    • pp.33-62
    • /
    • 2009
  • Internet computing is a disruptive IT innovation. The Semantic Web can be considered an IT innovation because Semantic Web technology possesses the potential to reduce information overload and enable semantic integration, using capabilities such as semantics and machine-processability. How should organizations adopt the Semantic Web? What factors affect the adoption and diffusion of the Semantic Web innovation? Most studies on the adoption and diffusion of innovation use empirical analysis as a quantitative research methodology in the post-implementation stage. There is criticism that the positivist approach, which requires theoretical rigor, can sacrifice relevance to practice. Rapid advances in technology require studies relevant to practice. In particular, it is realistically impossible to conduct a quantitative study of the factors affecting adoption of the Semantic Web because the Semantic Web is in its infancy. However, in this early stage of introduction of the Semantic Web, it is necessary to give practitioners and researchers a model and some guidelines for adoption and diffusion of the technology innovation. Thus, the purpose of this study is to present a model of adoption and diffusion of the Semantic Web and to offer propositions as guidelines for successful adoption through a qualitative research method including multiple case studies and in-depth interviews. The researcher conducted face-to-face interviews with 15 people and 2 interviews by telephone and e-mail to collect data until the categories were saturated. Nine interviews, including the 2 telephone interviews, were from nine user organizations adopting the technology innovation, and the others were from three supply organizations. Semi-structured interviews were used to collect data. The interviews were recorded on digital voice recorder memory and subsequently transcribed verbatim. 196 pages of transcripts were obtained from about 12 hours of interviews. Triangulation of evidence was achieved by examining each organization's website and various documents, such as brochures and white papers. The researcher read the transcripts several times and underlined core words, phrases, or sentences. Then, data analysis used the procedure of open coding, in which the researcher forms initial categories of information about the phenomenon being studied by segmenting information. QSR NVivo version 8.0 was used to categorize sentences containing similar concepts. 47 categories derived from the interview data were grouped into 21 categories, from which six factors were named. Five factors affecting adoption of the Semantic Web were identified. The first factor is demand pull, including requirements for improving the search and integration services of existing systems and for creating new services. Second, environmental conduciveness, reference models, uncertainty, technology maturity, potential business value, government sponsorship programs, promising prospects for technology demand, complexity, and trialability affect the adoption of the Semantic Web from the perspective of technology push. Third, absorptive capacity plays an important role in the adoption. Fourth, supplier's competence includes communication with and training for users, and the absorptive capacity of the supply organization. Fifth, over-expectation, which results in a gap between users' expectation level and perceived benefits, has a negative impact on the adoption of the Semantic Web. Finally, a factor including critical mass of ontology, budget, and
visible effects is identified as a determinant affecting routinization and infusion. The researcher suggests a model of adoption and diffusion of the Semantic Web, representing the relationships between the six factors and adoption/diffusion as dependent variables. Six propositions are derived from the adoption/diffusion model to offer some guidelines to practitioners and a research model for further studies. Proposition 1: Demand pull has an influence on the adoption of the Semantic Web. Proposition 1-1: The stronger the requirements for improving existing services, the more successfully the Semantic Web is adopted. Proposition 1-2: The stronger the requirements for new services, the more successfully the Semantic Web is adopted. Proposition 2: Technology push has an influence on the adoption of the Semantic Web. Proposition 2-1: From the perspective of user organizations, technology push forces such as environmental conduciveness, reference models, potential business value, and government sponsorship programs have a positive impact on the adoption of the Semantic Web, while uncertainty and lower technology maturity have a negative impact on its adoption. Proposition 2-2: From the perspective of suppliers, technology push forces such as environmental conduciveness, reference models, potential business value, government sponsorship programs, and promising prospects for technology demand have a positive impact on the adoption of the Semantic Web, while uncertainty, lower technology maturity, complexity, and lower trialability have a negative impact on its adoption. Proposition 3: Absorptive capacities such as organizational formal support systems, officers' or managers' competency in analyzing technology characteristics, their passion or willingness, and top management support are positively associated with successful adoption of the Semantic Web innovation from the perspective of user organizations. Proposition 4: Supplier's competence has a positive impact on the absorptive capacities of user organizations and on technology push forces. Proposition 5: The greater the gap of expectation between users and suppliers, the later the Semantic Web is adopted. Proposition 6: Post-adoption activities such as budget allocation, reaching critical mass, and sharing ontology to offer sustainable services are positively associated with successful routinization and infusion of the Semantic Web innovation from the perspective of user organizations.

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC
    • /
    • v.10C no.6
    • /
    • pp.683-694
    • /
    • 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor a massive volume of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, our algorithm should be very efficient in speed and use of memory space. X-tree Diff uses a special ordered labeled tree, X-tree, to represent XML/HTML documents. X-tree nodes have a special field, tMD, which stores a 128-bit hash value representing the structure and data of subtrees, so that identical subtrees from the old and new versions can be matched. During this process, X-tree Diff uses the Rule of Delaying Ambiguous Matchings, meaning that it performs exact matching only where a node in the old version has a one-to-one correspondence with the corresponding node in the new version, delaying all the others. This drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upwards in Step 2, and obtains more matchings downwards from the roots in Step 3. In Step 4, nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as in the average case. This result is even better than that of the BULD Diff algorithm, which is O(n log(n)) in the worst case. We experimented with X-tree Diff on real data, about 11,000 home pages from about 20 web sites, instead of synthetic documents manipulated for experimentation. Currently, the X-tree Diff algorithm is being used in a commercial hacking detection system, called WIDS (Web-Document Intrusion Detection System), which finds changes occurring in registered websites and reports suspicious changes to users.
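To make the matching step concrete, the sketch below reconstructs the core idea in rough form: compute a 128-bit digest over each node's label, value, and children (the tMD field), index the old tree's subtrees by that digest, and match only subtrees whose digest corresponds one-to-one, delaying ambiguous cases. This is an illustrative reconstruction under assumptions (the node fields, the use of MD5, and the traversal details are not taken from the paper), not the authors' implementation.

```python
import hashlib

class XTreeNode:
    """Simplified X-tree node: an element label, its text value, children,
    and a 128-bit subtree digest stored in the tMD field."""
    def __init__(self, label, value="", children=None):
        self.label = label
        self.value = value
        self.children = children or []
        self.tMD = None

def compute_tMD(node):
    """Post-order pass: hash the node's label/value together with its children's
    digests, so identical subtrees (structure + data) get identical digests."""
    h = hashlib.md5()
    h.update(node.label.encode())
    h.update(node.value.encode())
    for child in node.children:
        h.update(compute_tMD(child))
    node.tMD = h.digest()  # 16 bytes = 128 bits
    return node.tMD

def match_identical_subtrees(old_root, new_root):
    """Match subtrees of the new tree to old subtrees with the same digest,
    but only when the correspondence is one-to-one; ambiguous multi-matches
    are delayed (skipped here), echoing the Rule of Delaying Ambiguous Matchings."""
    compute_tMD(old_root)
    compute_tMD(new_root)

    old_index = {}
    stack = [old_root]
    while stack:
        n = stack.pop()
        old_index.setdefault(n.tMD, []).append(n)
        stack.extend(n.children)

    matches = []
    def walk(n):
        candidates = old_index.get(n.tMD, [])
        if len(candidates) == 1:          # unambiguous: match the whole subtree
            matches.append((candidates[0], n))
        else:                             # not found or ambiguous: look deeper
            for child in n.children:
                walk(child)
    walk(new_root)
    return matches
```

The later steps of the actual algorithm then propagate these matches upwards and downwards and decide the insert/delete operations; the sketch covers only the hash-based first step.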

Brand Equity and Purchase Intention in Fashion Products: A Cross-Cultural Study in Asia and Europe (상표자산과 구매의도와의 관계에 관한 국제비교연구 - 아시아와 유럽의 의류시장을 중심으로 -)

  • Kim, Kyung-Hoon;Ko, Eun-Ju;Graham, Hooley;Lee, Nick;Lee, Dong-Hae;Jung, Hong-Seob;Jeon, Byung-Joo;Moon, Hak-Il
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.4
    • /
    • pp.245-276
    • /
    • 2008
  • Brand equity is one of the most important concepts in business practice as well as in academic research. Successful brands can allow marketers to gain competitive advantage (Lassar et al., 1995), including the opportunity for successful extensions, resilience against competitors' promotional pressures, and the ability to create barriers to competitive entry (Farquhar, 1989). Branding plays a special role in service firms because strong brands increase trust in intangible products (Berry, 2000), enabling customers to better visualize and understand them. They reduce customers' perceived monetary, social, and safety risks in buying services, which are obstacles to evaluating a service correctly before purchase. Also, a high level of brand equity increases consumer satisfaction, repurchasing intent, and degree of loyalty. Brand equity can be considered as a mixture that includes both financial assets and relationships. Actually, brand equity can be viewed as the value added to the product (Keller, 1993), or the perceived value of the product in consumers' minds. Mahajan et al. (1990) claim that customer-based brand equity can be measured by the level of consumers' perceptions. Several researchers discuss brand equity based on two dimensions: consumer perception and consumer behavior. Aaker (1991) suggests measuring brand equity through price premium, loyalty, perceived quality, and brand associations. Viewing brand equity as the consumer's behavior toward a brand, Keller (1993) proposes similar dimensions: brand awareness and brand knowledge. Thus, past studies tend to identify brand equity as a multidimensional construct consisting of brand loyalty, brand awareness, brand knowledge, customer satisfaction, perceived equity, brand associations, and other proprietary assets (Aaker, 1991, 1996; Blackston, 1995; Cobb-Walgren et al., 1995; Na, 1995). Other studies tend to regard brand equity and other brand assets, such as brand knowledge, brand awareness, brand image, brand loyalty, perceived quality, and so on, as independent but related constructs (Keller, 1993; Kirmani and Zeithaml, 1993). Walters (1978) defined information search as "a psychological or physical action a consumer takes in order to acquire information about a product or store." However, each consumer has different methods of information search. There are two methods of information search: internal and external search. Internal search is "search of information already saved in the memory of the individual consumer" (Engel and Blackwell, 1982), which is "memory of a previous purchase experience or information from a previous search" (Beales, Mazis, Salop, and Staelin, 1981). External search is "a completely voluntary decision made in order to obtain new information" (Engel and Blackwell, 1982), which is "actions of a consumer to acquire necessary information by such methods as intentionally exposing oneself to advertisements, talking to friends or family, or visiting a store" (Beales, Mazis, Salop, and Staelin, 1981). There are many sources for consumers' information search, including advertisement sources such as the internet, radio, television, newspapers and magazines; information supplied by businesses such as sales people, packaging and in-store information; consumer sources such as family, friends and colleagues; and sources such as consumer protection agencies, government agencies, and mass media.
Understanding consumers' purchasing behavior is a key factor for a firm in attracting and retaining customers, improving the firm's prospects for survival and growth, and enhancing shareholder value. Therefore, marketers should understand consumers both as individuals and as market segments. One theory of consumer behavior supports the belief that individuals are rational: individuals think and move through stages when making a purchase decision. This view of rational thinkers has led to the identification of a consumer buying decision process. This decision process, with its different levels of involvement and influencing factors, has been widely accepted and is fundamental to understanding purchase intention, which represents what consumers think they will buy. Brand equity is a very important asset to a company, even more so than the product itself. This paper studies a brand equity model and its influencing factors, including information processes such as information searching and information sources, in the fashion markets of Asia and Europe. Information searching and information sources influence brand knowledge, which in turn influences consumers' purchase decisions. Nine research hypotheses are drawn to test the relationships among the antecedents of brand equity and purchase intention, and the relationships among brand knowledge, brand value, brand attitude, and brand loyalty. H1. Information searching influences brand knowledge positively. H2. Information sources influence brand knowledge positively. H3. Brand knowledge influences brand attitude. H4. Brand knowledge influences brand value. H5. Brand attitude influences brand loyalty. H6. Brand attitude influences brand value. H7. Brand loyalty influences purchase intention. H8. Brand value influences purchase intention. H9. The same research model will hold in Asia and Europe. We performed structural equation model analysis in order to test the hypotheses suggested in this study. The model fit indices of the research model in Asia were χ² = 195.19 (p = 0.0), NFI = 0.90, NNFI = 0.87, CFI = 0.90, GFI = 0.90, RMR = 0.083, and AGFI = 0.85, which means the model fit is good enough. In Europe, they were χ² = 133.25 (p = 0.0), NFI = 0.81, NNFI = 0.85, CFI = 0.89, GFI = 0.90, RMR = 0.073, and AGFI = 0.85, which again means the model fit is good enough. From the test results, all of these hypotheses except one are supported. In Europe, information search is not an antecedent of brand knowledge. This means that sales of global fashion brands like jeans in Europe are not expanding as rapidly as in Asian markets such as China, Japan, and South Korea, and young consumers in European countries are not more brand and fashion conscious than their counterparts in Asia. The results have theoretical and practical meaning and contributions. In the fashion jeans industry, relatively few studies have examined the viability of cross-national brand equity. This study provides insight on building global brand equity and suggests that information process elements like information search and information sources work differently in Asia and Europe in the fashion jeans market.
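As a purely illustrative aside, the path structure implied by hypotheses H1-H8 could be specified in a structural equation modeling tool roughly as follows. This is a minimal sketch using the Python semopy package with hypothetical variable names, treating each construct as a single observed score rather than the multi-indicator latent constructs and exact estimator the authors used.

```python
import pandas as pd
import semopy

# Path model mirroring H1-H8; the variable names are hypothetical column names,
# each standing in for a construct score (InfoSearch, InfoSources, etc.).
model_desc = """
BrandKnowledge ~ InfoSearch + InfoSources
BrandAttitude ~ BrandKnowledge
BrandValue ~ BrandKnowledge + BrandAttitude
BrandLoyalty ~ BrandAttitude
PurchaseIntention ~ BrandLoyalty + BrandValue
"""

def fit_region(data: pd.DataFrame):
    """Fit the same path model to one region's data (H9 would compare Asia vs. Europe)."""
    model = semopy.Model(model_desc)
    model.fit(data)
    estimates = model.inspect()           # path coefficients and p-values
    fit_stats = semopy.calc_stats(model)  # chi-square, CFI, GFI, RMSEA, etc.
    return estimates, fit_stats

# Usage sketch (assuming survey data with one column per construct):
# asia = pd.read_csv("asia_sample.csv")
# europe = pd.read_csv("europe_sample.csv")
# est_asia, fit_asia = fit_region(asia)
# est_europe, fit_europe = fit_region(europe)
```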

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategies utilizing item importance, itemset mining approaches for discovering itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. The mining algorithms compute transactional weights by utilizing the weight of each item in large databases. In addition, these algorithms discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, we can see the importance of a certain transaction through database analysis, because the weight of a transaction has a higher value if it contains many items with high values. We not only analyze the advantages and disadvantages but also compare the performance of the most famous algorithms in the frequent itemset mining field based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduces the concept and strategies of transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with the weight information. To efficiently conduct the processes for mining weighted frequent itemsets, the three algorithms use a special lattice-like data structure called WIT-tree. The algorithms do not need an additional database scanning operation after the construction of the WIT-tree is finished, since each node of the WIT-tree has item information such as item and transaction IDs. In particular, the traditional algorithms conduct a number of database scanning operations to mine weighted itemsets, whereas the algorithms based on the WIT-tree solve the overhead problem that can occur in the mining processes by reading the database only one time. Additionally, the algorithms use a technique for generating each new itemset of length N+1 on the basis of two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs the itemset combination processes by using the information of the transactions that contain all the itemsets. WIT-FWIs-MODIFY has a unique feature that decreases the number of operations for calculating the frequency of the new itemset. WIT-FWIs-DIFF utilizes a technique using the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (i.e., dense and sparse) in terms of runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm when the size of the database is changed. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, and on the sparse dataset, WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, based on the Apriori technique, has the worst efficiency because on average it requires a larger number of computations than the others.
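As a rough illustration of the transaction-weight idea described above (not of the WIT-tree algorithms themselves), the sketch below assigns each transaction a weight equal to the average weight of its items and then measures weighted support for candidate itemsets. The item weights, the toy database, and the threshold are made-up values for the example.

```python
from itertools import combinations

# Hypothetical item weights (importance values) and a toy transaction database.
item_weights = {"a": 0.9, "b": 0.7, "c": 0.4, "d": 0.6}
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"b", "c", "d"},
    {"a", "d"},
]

def transaction_weight(t):
    """A transaction's weight is the average weight of the items it contains,
    so transactions holding many high-value items carry more weight."""
    return sum(item_weights[i] for i in t) / len(t)

tw = [transaction_weight(t) for t in transactions]
total_weight = sum(tw)

def weighted_support(itemset):
    """Weighted support: the summed weight of transactions containing the itemset,
    normalized by the total transaction weight of the database."""
    covered = sum(w for t, w in zip(transactions, tw) if itemset <= t)
    return covered / total_weight

# Enumerate itemsets up to size 2 and keep those above a (made-up) threshold.
min_wsup = 0.4
for k in (1, 2):
    for combo in combinations(sorted(item_weights), k):
        ws = weighted_support(set(combo))
        if ws >= min_wsup:
            print(set(combo), round(ws, 3))
```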

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. It avoids investment risk structurally, so it is stable in the management of large funds and has been widely used in the financial field. The XGBoost model is a parallel tree-boosting method; it is an optimized gradient boosting model designed to be highly efficient and flexible. It not only handles billions of examples in limited memory environments but also learns very fast compared to traditional boosting methods. It is frequently used in various fields of data analysis and has many advantages. Therefore, in this study, we propose a new asset allocation model that combines the risk parity model and the XGBoost machine learning model. This model uses XGBoost to predict the risk of assets and applies the predicted risk to the process of covariance estimation. There are estimation errors between the estimation period and the actual investment period because the optimized asset allocation model estimates the proportion of investments based on historical data. These estimation errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. In this study, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, for the empirical test of the suggested model. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of the cumulative rate of return and obtained a large amount of sample data thanks to the long test period. Compared with the traditional risk parity model, this experiment recorded improvements in both cumulative return and reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easy to apply in practical investment. The results of the experiment showed an improvement in portfolio performance by reducing the estimation errors of the optimized asset allocation model. Many financial models and asset allocation models are limited in practical investment because of the most fundamental question of whether the past characteristics of assets will continue into the future in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models but also supplements the limitations of traditional methods and increases stability by predicting the risks of assets with the latest algorithm. There are various studies on parametric estimation methods to reduce estimation errors in portfolio optimization; we suggest a new method to reduce estimation errors in an optimized asset allocation model using machine learning. Thus, this study is meaningful in that it proposes an advanced artificial intelligence asset allocation model for the fast-developing financial markets.
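A minimal sketch of the overall idea (forecast next-period asset volatility with XGBoost, then size positions so each asset contributes roughly equally to portfolio risk) might look as follows. The feature set, window lengths, and the simple equal-risk-contribution solver are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize
from xgboost import XGBRegressor

def predict_next_vol(returns: pd.DataFrame, window: int = 20) -> pd.Series:
    """Fit one XGBoost regressor per asset on lagged rolling-volatility features
    and predict the next period's volatility (deliberately simplified features)."""
    preds = {}
    for col in returns.columns:
        vol = returns[col].rolling(window).std().dropna()
        X = pd.concat([vol.shift(1), vol.shift(5), vol.shift(20)], axis=1).dropna()
        y = vol.loc[X.index]
        model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
        model.fit(X.iloc[:-1], y.iloc[1:])              # learn to predict t+1 vol from t
        preds[col] = float(model.predict(X.iloc[[-1]])[0])
    return pd.Series(preds)

def risk_parity_weights(cov: np.ndarray) -> np.ndarray:
    """Solve for long-only weights whose risk contributions are (nearly) equal."""
    n = cov.shape[0]

    def objective(w):
        port_var = w @ cov @ w
        rc = w * (cov @ w)                              # each asset's risk contribution
        return np.sum((rc - port_var / n) ** 2)

    res = minimize(
        objective,
        np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),
    )
    return res.x

# Usage sketch: the forecast enters the allocation by replacing the historical
# volatilities in the covariance estimate before solving for the weights.
# returns = pd.DataFrame(...)            # daily sector returns
# sigma = predict_next_vol(returns).values
# cov = np.outer(sigma, sigma) * returns.corr().values
# weights = risk_parity_weights(cov)
```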

How to improve the accuracy of recommendation systems: Combining ratings and review texts sentiment scores (평점과 리뷰 텍스트 감성분석을 결합한 추천시스템 향상 방안 연구)

  • Hyun, Jiyeon;Ryu, Sangyi;Lee, Sang-Yong Tom
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.219-239
    • /
    • 2019
  • As providing customized services to individuals becomes important, research on personalized recommendation systems is constantly being carried out. Collaborative filtering is one of the most popular systems in academia and industry. However, there is a limitation in the sense that recommendations are mostly based on quantitative information such as users' ratings, which lowers accuracy. To solve this problem, many studies have attempted to improve the performance of recommendation systems by using information other than the quantitative information. Good examples are uses of sentiment analysis on customer review text data. Nevertheless, the existing research has not directly combined the results of sentiment analysis with quantitative rating scores in the recommendation system. Therefore, this study aims to reflect the sentiments shown in the reviews in the rating scores. In other words, we propose a new algorithm that can directly convert a user's own review into empirically quantitative information and reflect it directly in the recommendation system. To do this, we needed to quantify users' reviews, which were originally qualitative information. In this study, a sentiment score was calculated through the sentiment analysis technique of text mining. The data consisted of movie reviews. Based on the data, a domain-specific sentiment dictionary was constructed for the movie reviews. Regression analysis was used as the method to construct the sentiment dictionary: each positive/negative dictionary was constructed using Lasso regression, Ridge regression, and ElasticNet methods. Based on the constructed sentiment dictionaries, accuracy was verified through a confusion matrix. The accuracy of the Lasso-based dictionary was 70%, the accuracy of the Ridge-based dictionary was 79%, and that of the ElasticNet dictionary (α = 0.3) was 83%. Therefore, in this study, the sentiment score of a review is calculated based on the dictionary of the ElasticNet method. It was combined with the rating to create a new rating. In this paper, we show that collaborative filtering that reflects the sentiment scores of user reviews is superior to the traditional method that only considers the existing ratings. To show this, the proposed approach is applied to memory-based user-based collaborative filtering, item-based collaborative filtering, and model-based matrix factorization (SVD and SVD++). Based on these algorithms, the mean absolute error (MAE) and the root mean square error (RMSE) are calculated to compare the recommendation system using the rating that combines sentiment scores with a system that only considers ratings. When the evaluation index was MAE, the improvement was 0.059 for UBCF, 0.0862 for IBCF, 0.1012 for SVD, and 0.188 for SVD++. When the evaluation index was RMSE, the improvement was 0.0431 for UBCF, 0.0882 for IBCF, 0.1103 for SVD, and 0.1756 for SVD++. As a result, it can be seen that the prediction performance of the rating that reflects the sentiment score proposed in this paper is superior to that of the conventional rating. In other words, this paper confirms that collaborative filtering that reflects the sentiment scores of user reviews shows superior accuracy compared with conventional collaborative filtering that only considers the quantitative score.
We then performed a paired t-test to validate that the proposed model was the better approach and concluded that the proposed model is indeed better. In this study, to overcome the limitation of previous research that judges users' sentiment only by quantitative rating scores, reviews were quantified numerically and users' opinions were incorporated into the recommendation system in a more refined form to improve accuracy. The findings of this study have managerial implications for recommendation system developers who need to consider both quantitative and qualitative information. The way of constructing the combined system in this paper might be directly used by such developers.
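One way to read the pipeline described above is sketched below: build an ElasticNet-style sentiment dictionary from review text, score each review with it, blend the score into the star rating, and evaluate with MAE/RMSE. The toy data, the blending weight, and the use of scikit-learn's l1_ratio to loosely mirror the paper's α = 0.3 are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Toy reviews and ratings standing in for the movie-review corpus (hypothetical data).
reviews = ["great story and acting", "boring plot", "great acting but a boring ending"]
ratings = np.array([5.0, 2.0, 3.0])

# 1) Sentiment dictionary: regress ratings on word counts so each word receives a
#    positive/negative coefficient; ElasticNet keeps the dictionary sparse.
vec = CountVectorizer()
X = vec.fit_transform(reviews)
reg = ElasticNet(alpha=0.1, l1_ratio=0.3)   # l1_ratio loosely mirrors the paper's alpha = 0.3
reg.fit(X, ratings)
dictionary = dict(zip(vec.get_feature_names_out(), reg.coef_))

# 2) Sentiment score of each review, already on the rating scale via the regression.
sentiment = reg.predict(X)

# 3) Blend the sentiment score into the original rating; the adjusted rating is what
#    would feed the collaborative-filtering step (the 0.5 blend weight is made up).
lam = 0.5
adjusted = lam * ratings + (1 - lam) * sentiment

# 4) Evaluate with MAE / RMSE, the same metrics reported above (here, trivially
#    against the original ratings, just to show the computation).
mae = mean_absolute_error(ratings, adjusted)
rmse = np.sqrt(mean_squared_error(ratings, adjusted))
print(dictionary, round(mae, 3), round(rmse, 3))
```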