• Title/Summary/Keyword: computer algorithm


Selectively Partial Encryption of Images in Wavelet Domain (웨이블릿 영역에서의 선택적 부분 영상 암호화)

  • Dujit Dey
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • Vol. 28, No. 6C
    • /
    • pp.648-658
    • /
    • 2003
  • As the use of image/video content increases, security problems arise for paid image data and for images requiring confidentiality. This paper proposes an image encryption methodology that hides image information by encrypting the quantized coefficients in the wavelet domain. The method encrypts only part of the image data rather than the whole original image, using three data-selection strategies. First, exploiting the fact that the wavelet transform decomposes the original image into frequency sub-bands, only some of the sub-bands are encrypted, which is enough to make the resulting image unrecognizable. Second, of the bits representing each pixel, only the MSBs are encrypted. Finally, the pixels to be encrypted within a given sub-band are selected pseudo-randomly using an LFSR (Linear Feedback Shift Register); part of the encryption key seeds the LFSR and selects which of its parallel output bits drive the random selection, which increases the strength of the encryption algorithm. The proposed methods were implemented in software and tested on about 500 images. The results show that encrypting only about 1/1000 of the original image data is sufficient to make the original image unrecognizable, so the proposed methods achieve a strong encryption effect with a small amount of encryption. Several encryption schemes, differing in the choice of sub-bands and in the number of LFSR output bits used for pixel selection, are also proposed, and a trade-off is shown to exist between execution time and encryption effect, so the methods can be chosen selectively according to the application area. Because the proposed methods operate at the application layer, they are also expected to be a good solution to the end-to-end security problem, which is emerging as an important issue in networks with both wired and wireless sections.
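
As a rough illustration of the LFSR-driven selection scheme described in this abstract, the sketch below XORs the MSBs of pseudo-randomly chosen coefficients in one sub-band. It is a minimal sketch, not the authors' implementation: the LFSR width, taps, key schedule, and sub-band layout are assumptions made for the example.

```python
# Minimal sketch of LFSR-driven selective encryption of pixel MSBs.
# The taps, key schedule, and sub-band layout below are illustrative
# assumptions; the paper does not specify them.
import numpy as np

def lfsr_bits(seed: int, taps: tuple, n: int, width: int = 16):
    """Yield n output bits from a Fibonacci LFSR of the given width."""
    state = seed & ((1 << width) - 1)
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
        yield state & 1

def encrypt_subband(subband: np.ndarray, seed: int) -> np.ndarray:
    """XOR the MSB of pseudo-randomly selected coefficients in one sub-band."""
    flat = subband.astype(np.uint8).flatten()
    bits = list(lfsr_bits(seed, taps=(15, 13, 12, 10), n=2 * flat.size))
    keystream = bits[flat.size:]                     # second half keys the XOR
    for i in range(flat.size):
        if bits[i]:                                  # first half selects pixels
            flat[i] ^= np.uint8(keystream[i] << 7)   # flip the MSB
    return flat.reshape(subband.shape)

# Example: encrypt only one sub-band (e.g., the low-frequency LL band).
rng = np.random.default_rng(0)
ll = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cipher = encrypt_subband(ll, seed=0xACE1)
```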

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 22, No. 4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence: an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by biological neural networks; others include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, this study uses the SVM model because it is designed for the classification and regression analysis that fit our problem. The core principle of the SVM is to find a reasonable hyperplane that separates different groups in the data space. Given information about the data in any two groups, the SVM judges to which group new data belong based on the hyperplane obtained from the given data set; thus, the more meaningful data are available, the better the machine learns. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices with machine learning algorithms. Recently, financial companies have begun to provide Robo-Advisor services (a compound of "robot" and "advisor") that perform various financial tasks through advanced algorithms operating on rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage their portfolios automatically. In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model and applying the forecasts to real option trading to increase trading performance. VKOSPI measures the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices; it is analogous to the VIX index in the United States, which is based on S&P 500 option prices. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices: VKOSPI and option prices move in the same direction regardless of option type (call or put, at various strike prices), because when volatility increases, every call and put premium increases as the probability that the option will be exercised rises. Through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility, an investor can know in real time how much the option price rises for a given rise in volatility. Accurate forecasting of VKOSPI movements is therefore one of the important factors that can generate profit in option trading. In this study, we verified with real option data that accurate VKOSPI forecasts can yield large profits in real option trading. To the best of our knowledge, no previous study has predicted the direction of VKOSPI with machine learning and applied the prediction to actual option trading.
We predicted daily VKOSPI changes with the SVM model and then entered intraday option strangle positions, which profit as option prices fall, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether trading based on the SVM prediction is applicable to real option trading. The prediction accuracy of VKOSPI was 57.83% on average, and positions were entered 43.2 times, less than half of the benchmark's 100 times; a small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
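
The abstract does not list the SVM's input features, so the sketch below only illustrates the general shape of such a direction classifier with scikit-learn on synthetic data; the features, kernel, and train/test split are assumptions.

```python
# Minimal sketch of direction classification with an SVM, in the spirit of
# the study above. The features and data here are synthetic placeholders;
# the paper's actual inputs are not listed in the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))        # e.g., lagged VKOSPI changes, returns
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X[:400], y[:400])          # train on the first 400 days
acc = model.score(X[400:], y[400:])  # out-of-sample hit rate
print(f"direction accuracy: {acc:.2%}")
# Trade-rule sketch: enter a short strangle only on days predicted "down".
```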

Radio location algorithm in microcellular wide-band CDMA environment (마이크로 셀룰라 Wide-band CDMA 환경에서의 위치 추정 알고리즘)

  • Chang, Jin-Weon;Han, Il;Sung, Dan-Keun;Shin, Bung-Chul;Hong, Een-Kee
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • Vol. 23, No. 8
    • /
    • pp.2052-2063
    • /
    • 1998
  • Various full-scale radio location systems have been developed since ground-based radio navigation systems appeared during World War II, and more recently the Global Positioning System (GPS) has been widely used as a representative location system. In addition, radio location systems based on cellular networks are being studied intensively as cellular services become more and more popular. However, these studies have focused mainly on macrocellular systems whose base stations are mutually synchronized; there has been no study of systems whose base stations are asynchronous. In this paper, we propose two radio location algorithms for microcellular CDMA systems with asynchronous base stations. The first estimates the position of a personal station as the center of a rectangular area that approximates the realistic common area. The second, a road-map-based method, first finds candidate positions at the centers of roads that lie a pseudo-range away from the base station serving the personal station, and then estimates the position by monitoring the pilot signal strengths of neighboring base stations. We compare these two algorithms with three widely used algorithms through computer simulations and investigate the effect of interference on pseudo-range measurements. The proposed algorithms require no recursive calculation and yield smaller position errors than the existing algorithms because they are less affected by non-line-of-sight propagation in microcellular environments.
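
The first algorithm's idea, placing the personal station at the center of a rectangle approximating the common area of the pseudo-range circles, can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the rectangle here is simply the intersection of the circles' bounding boxes, and the station layout and NLOS bias are invented.

```python
# Minimal sketch of the "center of the common area" idea: approximate the
# intersection of pseudo-range circles by the intersection of their bounding
# boxes and take its center. Station layout and ranges are made up here.
import numpy as np

def rect_center_estimate(stations, pseudo_ranges):
    """stations: (N,2) base station coords; pseudo_ranges: (N,) distances."""
    stations = np.asarray(stations, float)
    r = np.asarray(pseudo_ranges, float)
    # Bounding box of each circle, then intersect all the boxes.
    lo = (stations - r[:, None]).max(axis=0)   # max of lower-left corners
    hi = (stations + r[:, None]).min(axis=0)   # min of upper-right corners
    if np.any(lo > hi):
        raise ValueError("pseudo-range boxes do not overlap")
    return (lo + hi) / 2                       # center of the common rectangle

stations = [(0.0, 0.0), (1000.0, 0.0), (500.0, 800.0)]
true_pos = np.array([450.0, 300.0])
ranges = [np.linalg.norm(true_pos - s) * 1.05 for s in stations]  # NLOS bias
print(rect_center_estimate(stations, ranges))
```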


Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 21, No. 1
    • /
    • pp.119-142
    • /
    • 2015
  • With the explosive growth in the volume of information, Internet users experience considerable difficulty in obtaining the information they need online. Against this backdrop, ever greater importance is being placed on recommender systems, which provide information catering to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF), and collaborative filtering (CF). CBF and DF require external information and thus cannot be applied to a variety of domains; CF, on the other hand, is widely used since it is relatively free from this domain constraint. CF is broadly classified into memory-based, model-based, and hybrid CF. Model-based CF addresses the drawbacks of CF with a Bayesian, clustering, or dependency network model; it not only alleviates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model building and results in a trade-off between performance and scalability, attributable to reduced coverage, a type of sparsity issue. Expensive model building can also lead to performance instability, since changes in the domain environment cannot be incorporated into the model immediately; cumulative unreflected changes eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into CBCF to propose predictive clustering-based CF (PCCF), which addresses both reduced coverage and unstable performance. The method improves performance stability by tracking changes in user preferences, bridging the gap between the static model and dynamic users, and it expands coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries in preference transition detection. Third, user propensities are normalized in propensity clustering using the patterns of change (propensities) in user preferences. Lastly, a preference prediction model is built to predict user preferences for items. The method was validated by testing its robustness against performance instability and the scalability-performance trade-off. The initial test compared and analyzed the performance of recommender systems based on IBCF, CBCF, ICFEC, and PCCF in an environment where data sparsity had been minimized; the following test adjusted the optimal number of clusters in CBCF, ICFEC, and PCCF and compared the subsequent changes in system performance. The results revealed that the suggested method produced only an insignificant improvement in performance over the existing techniques, and no significant improvement in the standard deviation, which indicates the degree of data fluctuation. It did, however, yield a marked improvement over the existing techniques in range, which indicates the level of performance fluctuation.
The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test there was a 36.05% improvement in the level of performance fluctuation driven by changes in the number of clusters. This signifies that the proposed method, despite its slight performance improvement, clearly offers better performance stability than the existing techniques. Further research will be directed toward enhancing the recommendation performance, which failed to improve significantly over the existing techniques; future work will consider introducing a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
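
The abstract's combination of fuzzy cluster memberships with Markov transition probabilities can be illustrated by a minimal sketch: project a user's membership one step forward through a transition matrix, then take the expected rating. The clustering method, matrices, and numbers below are illustrative assumptions, not PCCF's actual formulas.

```python
# Minimal sketch of the PCCF idea: blend fuzzy cluster membership with
# Markov transition probabilities between preference clusters to predict
# a rating. All values here are illustrative assumptions.
import numpy as np

# Fuzzy membership of one user over three preference clusters (sums to 1).
membership = np.array([0.6, 0.3, 0.1])
# Row-stochastic transition matrix learned from time-ordered rating changes.
transition = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.2, 0.3, 0.5]])
# Mean rating each cluster gives to the target item.
cluster_item_mean = np.array([4.2, 3.1, 2.0])

# Project the membership one step forward, then take the expected rating.
future_membership = membership @ transition
predicted_rating = future_membership @ cluster_item_mean
print(round(float(predicted_rating), 2))
```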

Social Tagging-based Recommendation Platform for Patented Technology Transfer (특허의 기술이전 활성화를 위한 소셜 태깅기반 지적재산권 추천플랫폼)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 21, No. 3
    • /
    • pp.53-77
    • /
    • 2015
  • Korea has witnessed an increasing number of domestic patent applications, but a majority of them are not utilized to their full potential and end up becoming obsolete. According to the 2012 National Congress Inspection of Administration, about 73% of the patents held by universities and publicly funded research institutions failed to create social value and remain latent. One main cause is that patent creators, whether individual researchers, universities, or research institutions, lack the ability to commercialize their patents into viable businesses with the enterprises that need them; on the enterprise side, it is hard to find appropriate patents through keyword search alone. This paper proposes a patent recommendation system that can identify and recommend, easily and efficiently, the intellectual property rights matching a user's fields of interest from among a rapidly accumulating pool of patent assets. The proposed system extracts core content and technology sectors from the existing pool of patents and combines them with secondary social knowledge derived from user-created tags to find the best patents to recommend. In an early stage, where no tag information has accumulated, recommendation relies on content characteristics identified by analyzing keywords contained in patent attributes such as 'Title of Invention' and 'Claims'. To do this, the system extracts only nouns from the patents and assigns each noun a weight according to its importance across all patents via TF-IDF analysis; it then finds patents whose weights are similar to those of the patents a user prefers. In this paper, this similarity is called the 'Domain Similarity'. Next, the system extracts technology-sector characteristics from patent documents by analyzing International Patent Classification (IPC) codes. Every patent has one or more IPC codes, and each user can attach one or more tags to the patents they like, so each user has a set of IPC codes drawn from the tagged patents. The system uses this IPC set to analyze each user's technology preference and find well-fitted patents, calculating a 'Technology Similarity' between the user's IPC set and the IPC codes of all other patents. Then, once tag information from multiple users has accumulated, the system expands the recommendations by considering other users' social tags on the patents tagged by the user in question; the similarity between the tags of a user's preferred patents and those of other patents is called the 'Social Similarity'. Lastly, a 'Total Similarity' is calculated by adding these three similarities, and the patents with the highest Total Similarity are recommended to each user. The system was applied to a total of 1,638 Korean patents obtained from the Korea Industrial Property Rights Information Service (KIPRIS) run by the Korea Intellectual Property Office; since this original dataset does not include tag information, virtual tag information was created to construct a semi-virtual dataset.
The proposed recommendation algorithm was implemented in JAVA, and a prototype graphical user interface was also designed for this study. As the proposed system has no dependent variables and uses virtual data, the recommendations cannot be verified statistically; the study therefore uses a scenario test to verify the operational feasibility and recommendation effectiveness of the system. The results are expected to improve the chances of matching promising patents with the most suitable businesses, on the assumption that users' experiential knowledge can be accumulated, managed, and utilized within the as-is patent system, which currently manages only standardized patent information.
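
A minimal sketch of the Total Similarity idea follows: TF-IDF cosine similarity over patent text (Domain Similarity), set overlap of IPC codes (Technology Similarity), and set overlap of tags (Social Similarity), summed. The Jaccard measure, equal weighting, and toy data are assumptions; the paper's exact formulas are not given in the abstract.

```python
# Minimal sketch of the Total Similarity combination. The Jaccard measure,
# equal weights, and toy patents are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patents = [
    {"text": "wavelet image encryption method", "ipc": {"H04N", "G06T"}, "tags": {"security"}},
    {"text": "image compression using wavelet transform", "ipc": {"G06T"}, "tags": {"imaging"}},
    {"text": "battery charging control circuit", "ipc": {"H02J"}, "tags": {"power"}},
]
preferred = patents[0]  # a patent the user has tagged

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Domain Similarity: TF-IDF weights over nouns/terms, compared by cosine.
tfidf = TfidfVectorizer().fit_transform(p["text"] for p in patents)
domain = cosine_similarity(tfidf[0], tfidf).ravel()

for i, p in enumerate(patents[1:], start=1):
    tech = jaccard(preferred["ipc"], p["ipc"])      # Technology Similarity
    social = jaccard(preferred["tags"], p["tags"])  # Social Similarity
    total = domain[i] + tech + social               # Total Similarity
    print(i, round(total, 3))
```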

Recent Progress in Air Conditioning and Refrigeration Research: A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2006 (공기조화, 냉동 분야의 최근 연구 동향: 2006년 학회지 논문에 대한 종합적 고찰)

  • Han, Hwa-Taik;Shin, Dong-Sin;Choi, Chang-Ho;Lee, Dae-Young;Kim, Seo-Young;Kwon, Yong-Il
    • Korean Journal of Air-Conditioning and Refrigeration Engineering
    • /
    • Vol. 20, No. 6
    • /
    • pp.427-446
    • /
    • 2008
  • A review of the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2006 has been conducted, focusing on the current status of research in heating, cooling, ventilation, sanitation, and building environments. The conclusions are as follows. (1) Research trends in fluid engineering were surveyed in the groups of general fluid flow, fluid machinery, piping, etc. New research topics include micro heat exchangers and siphon cooling devices using nano-fluids. Traditional CFD and flow visualization methods remained popular and widely used in research and development. Diffusers and compressors were studied in fluid machinery, and flow and heat transfer characteristics and piping optimization were studied in piping systems. (2) The papers on heat transfer are categorized into heat transfer characteristics, heat exchangers, heat pipes, and two-phase heat transfer. Topics on heat transfer characteristics include thermal transport in a cryo-chamber, an LCD panel, a dryer, and heat-generating electronics. The heat exchangers investigated include pin-tube, plate, and ventilation air-to-air types, as well as heat-transfer-enhancing tubes. Research was reported on a reversible loop heat pipe, the influence of the NCG charging mass on heat transport capacity, and the chilling start-up characteristics of a heat pipe. In the two-phase heat transfer area, studies on frost growth, ice slurry formation, and liquid spray cooling were presented; studies on the boiling of R-290 and on applying carbon nanotubes to enhance boiling were notable in this area. (3) Many studies on refrigeration and air-conditioning systems addressed the practical issues of performance and reliability enhancement. Air-conditioning systems with multiple indoor units attracted attention in several works, which treated refrigerant charge and control algorithms. Systems with alternative refrigerants were also studied: carbon dioxide, hydrocarbons, and their mixtures were considered, and heat transfer correlations were proposed. (4) Owing to high oil prices, energy consumption has received attention in mechanical building systems. Work in this field is reviewed in the groups of heat and cold sources; air conditioning and cleaning; ventilation and fire research, including tunnel ventilation; and piping systems. The papers promote the efficient or effective use of energy, which saves energy and reduces environmental pollution and operating cost. (5) Studies on indoor air quality took a large portion of the building environment field, and various other subjects such as indoor thermal comfort were also investigated through computer simulation, case studies, and field experiments. Studies on energy include not only optimization studies and economic analyses of building equipment but also the usability of renewable energy in geothermal and solar systems.

3D Histology Using the Synchrotron Radiation Propagation Phase Contrast Cryo-microCT (방사광 전파위상대조 동결미세단층촬영법을 활용한 3차원 조직학)

  • Kim, Ju-Heon;Han, Sung-Mi;Song, Hyun-Ouk;Seo, Youn-Kyung;Moon, Young-Suk;Kim, Hong-Tae
    • Anatomy & Biological Anthropology
    • /
    • Vol. 31, No. 4
    • /
    • pp.133-142
    • /
    • 2018
  • 3D histology is an imaging approach for obtaining 3D structural information about cells or tissues. Synchrotron radiation propagation phase contrast micro-CT has been used as a 3D imaging method; however, simple phase contrast micro-CT does not give sufficient micro-structural information when the specimen contains soft elements, as is the case with many biomedical tissue samples. The purpose of this study is to develop a new technique that enhances the phase contrast effect for soft tissue imaging. Experiments were performed at the imaging beamlines of the Pohang Accelerator Laboratory (PAL). Frozen biomedical tissue samples were mounted on a computer-controlled precision stage and rotated through 180° in 0.18° increments. The X-ray shadow of a specimen was converted into a visual image on the surface of a CdWO4 scintillator, magnified with a microscope objective lens (×5 or ×20), and captured with a digital CCD camera. Three-dimensional volume images of the specimen were obtained by applying a filtered back-projection algorithm to the projection images using the software package OCTOPUS; surface reconstruction, volume segmentation, and rendering were performed with Amira software. We found that synchrotron phase contrast imaging of frozen tissue samples has higher contrast for soft tissue than imaging of non-frozen samples. In conclusion, synchrotron radiation propagation phase contrast cryo-microCT imaging offers a promising tool for non-destructive, high-resolution 3D histology.
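
The reconstruction step named in the abstract, filtered back-projection over 0.18° steps through 180°, can be sketched with scikit-image's Radon tools on a toy phantom. This is a minimal sketch, not the OCTOPUS pipeline used by the authors.

```python
# Minimal sketch of filtered back-projection reconstruction from projection
# images, the step the study performs with the OCTOPUS package. This uses
# scikit-image's Radon tools on a toy phantom, not the authors' pipeline.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.25)   # small test slice
# Projections every 0.18 degrees through 180 degrees, as in the experiment.
angles = np.arange(0.0, 180.0, 0.18)
sinogram = radon(phantom, theta=angles)
# Filtered back-projection with the default ramp filter.
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
print(reconstruction.shape)
```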

Simulation and Post-representation: a study of Algorithmic Art (시뮬라시옹과 포스트-재현 - 알고리즘 아트를 중심으로)

  • Lee, Soojin
    • 기호학연구
    • /
    • No. 56
    • /
    • pp.45-70
    • /
    • 2018
  • Postmodern philosophy's criticism of the system of representation, which had continued since the Renaissance, is based on a critique of the dichotomies that separate subject from object and human beings from their environment. The interactivity highlighted in a series of works emerging from the postmodern trends of the 1960s carried over into the interactive character of digital art in the late 1990s. The key feature of digital art is the possibility of infinite variation reflecting unpredictable changes based on on-site public participation. In this process, the importance of computer programs is highlighted: instead of using existing programs as they are, more and more artists write their own algorithms or create unique algorithms through collaboration with programmers. We live in an era of paradigm shift in which programming itself must be considered a creative act. Simulation and VR technologies draw attention as techniques for representing the meaning of reality, and simulation technology helps artists create experimental works. In fact, Baudrillard's concept of simulation designates another reality that has nothing to do with our own, rather than a reality that faithfully represents ours; his book Simulacra and Simulation refers to the existence of a reality entirely different from the traditional concept of reality. His argument does not concern problems of right and wrong; there is no metaphysical meaning. Applying the concept of simulation to algorithmic art, the artist models the complex attributes of reality in a digital system, aiming to build and integrate the internal laws that structure and activate a world (specific or individual), that is to say, to simulate that world. If the images of the traditional order correspond to the reproduction of the real world, the synthesized images of algorithmic art and its simulated space-time are forms of art that facilitate experience. The moment of seeing and hearing the work of Ian Cheng presented in this article is a moment of personal experience, and perception occurs at that moment: not a complete and closed process but a continuous and changing one. It is this active, situational awareness that is required of the audience to comprehend the forms of post-representation.

Efficient Deep Learning Approaches for Active Fire Detection Using Himawari-8 Geostationary Satellite Images (Himawari-8 정지궤도 위성 영상을 활용한 딥러닝 기반 산불 탐지의 효율적 방안 제시)

  • Sihyun Lee;Yoojin Kang;Taejun Sung;Jungho Im
    • Korean Journal of Remote Sensing
    • /
    • Vol. 39, No. 5_3
    • /
    • pp.979-995
    • /
    • 2023
  • As wildfires are difficult to predict, real-time monitoring is crucial for a timely response. Geostationary satellite images are very useful for active fire detection because they can monitor a vast area with high temporal resolution (e.g., 2 min). Existing satellite-based active fire detection algorithms detect thermal outliers using threshold values derived from statistical analysis of brightness temperature. However, the difficulty of establishing suitable thresholds hinders such threshold-based methods from detecting low-intensity fires and from achieving generalized performance. In light of these challenges, machine learning has emerged as a potential solution. Until now, relatively simple techniques such as random forest, vanilla convolutional neural networks (CNN), and U-Net have been applied to active fire detection. This study therefore proposed an active fire detection algorithm using state-of-the-art (SOTA) deep learning techniques on data from the Advanced Himawari Imager and evaluated it over East Asia and Australia. The SOTA model was developed by applying EfficientNet and the Lion optimizer, and the results were compared with a model using a vanilla CNN structure. EfficientNet outperformed the CNN with F1-scores of 0.88 and 0.83 in East Asia and Australia, respectively. Applying weighted loss, equal sampling, and image augmentation to address data imbalance further improved performance, yielding F1-scores of 0.92 in East Asia and 0.84 in Australia. It is anticipated that the timely responses facilitated by this SOTA deep learning-based approach to active fire detection will effectively mitigate the damage caused by wildfires.
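
A minimal sketch of the described setup, an EfficientNet with a class-weighted loss and the Lion optimizer, follows. The band count, patch size, pos_weight value, and the timm and lion-pytorch packages are assumptions made for illustration; the paper's actual architecture and training configuration are not given in the abstract.

```python
# Minimal sketch of the training setup described above: an EfficientNet
# classifier with a class-weighted loss to counter the rarity of fire
# samples, trained with the Lion optimizer. Channel count, patch size, and
# weights are illustrative; the paper's exact configuration is not given.
import torch
import timm                      # pip install timm
from lion_pytorch import Lion    # pip install lion-pytorch

# 16 Himawari-8/AHI bands in, binary fire / no-fire out (an assumption).
model = timm.create_model("efficientnet_b0", in_chans=16, num_classes=1)
# Weight positive (fire) samples heavily to offset class imbalance.
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor([50.0]))
optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)

x = torch.randn(8, 16, 64, 64)            # a dummy batch of patches
y = (torch.rand(8, 1) < 0.1).float()      # ~10% fire labels
logits = model(x)
loss = criterion(logits, y)
loss.backward()
optimizer.step()
print(float(loss))
```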

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science
    • /
    • Vol. 57, No. 1
    • /
    • pp.82-108
    • /
    • 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried-cultural-heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics that utilizes open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning methods. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map from the edited 3D model, and creating a digital drawing. To enhance accessibility, the method uses only open-source software throughout the entire process. The results of this study confirm that, in the quantitative evaluation, the deviation between measurements of the actual artifact and of the 3D model was minimal, and the quantitative quality analyses from the open-source and commercial software showed high similarity. However, data processing was overwhelmingly faster with the commercial software, presumably a result of the higher computational speed of its improved algorithms. In the qualitative evaluation, some differences in mesh and texture quality occurred: the 3D models generated by some open-source software exhibited noise and roughness on the mesh surface and made it difficult to confirm production marks and surface patterns on the relics. Nevertheless, some of the open-source software produced quality comparable to that of the commercial software in both the quantitative and qualitative evaluations. The open-source 3D model editors could not only post-process, align, and merge 3D models but also adjust scale, produce joining surfaces, and render the images necessary for the actual measurement of relics; the final drawing was traced in a CAD program that is likewise open-source. In archaeological research, photogrammetry is applicable to many processes, including excavation, report writing, and research on numerical data from 3D models. With breakthrough developments in computer vision, open-source software has diversified and its performance has significantly improved. Given the high accessibility of such digital technology, the acquisition of 3D model data in archaeology will serve as basic data for the preservation of, and active research on, cultural heritage.
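
The quantitative deviation check mentioned in the abstract can be sketched with the open-source trimesh library: sample points on one mesh and measure their distance to a reference mesh. The meshes below are toy primitives so the sketch runs as-is; in practice the two meshes would be loaded with trimesh.load. This is an illustration, not the study's actual tooling.

```python
# Minimal sketch of mesh-to-mesh deviation measurement with trimesh
# (pip install trimesh). Toy primitive meshes stand in for a reference
# scan and a photogrammetry model, which would normally be loaded with
# trimesh.load("artifact.ply").
import numpy as np
import trimesh

reference = trimesh.creation.icosphere(subdivisions=3, radius=1.0)
candidate = trimesh.creation.icosphere(subdivisions=2, radius=1.01)  # coarser stand-in

# Sample points on the candidate surface and measure distance to reference.
points, _ = trimesh.sample.sample_surface(candidate, 10000)
closest, distances, _ = trimesh.proximity.closest_point(reference, points)
print(f"mean deviation:  {np.mean(distances):.4f}")
print(f"95th percentile: {np.percentile(distances, 95):.4f}")
```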