• Title/Summary/Keyword: 데이지


Long-Term Memory and Correct Answer Rate of Foreign Exchange Data (환율데이타의 장기기억성과 정답율)

  • Weon, Sek-Jun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.12
    • /
    • pp.3866-3873
    • /
    • 2000
  • In this paper, we investigate the long-term memory and the correct answer rate of foreign exchange data (Yen/Dollar), one of the economic time series. There are many cases where two kinds of fractal dimensions exist in time series generated from dynamical systems such as AR models, which are typical models having short-term memory. The sample interval separating these two dimensions is denoted by $k^{crossover}$. Let the fractal dimension be $D_1$ for $K < k^{crossover}$ and $D_2$ for $K > k^{crossover}$ in the statistical model. Usually, statistical models have dimensions such that $D_1 < D_2$ and $D_2 \cong 2$, but real time series such as the NIKKEI show a result contrary to this. The exchange data, one of the real time series, have the relation $D_1 > D_2$: when the interval between data increases, the correlation between data increases, which is quite a peculiar phenomenon. We predict the exchange data by neural networks, and we confirm that $\beta$ obtained from the prediction errors and $D$ calculated from the time series data precisely satisfy the relationship $\beta = 2 - 2D$, which is derived from a non-linear model having a fractal dimension. We also identified that the difference of fractal dimension appeared in the correct answer rate.

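The relation $\beta = 2 - 2D$ checked in the abstract can be illustrated with a minimal sketch: estimate the spectral exponent $\beta$ of a series from a log-log fit of its power spectrum, then read off the implied fractal dimension via the paper's relation. This is a generic illustration on a synthetic random walk, not the paper's Yen/Dollar data or its neural-network predictor.

```python
import numpy as np

def spectral_exponent(x):
    """Estimate beta from a log-log fit of the power spectrum, P(f) ~ f^(-beta)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    p = np.abs(np.fft.rfft(x)) ** 2          # periodogram
    f = np.fft.rfftfreq(len(x))
    mask = f > 0                             # drop the zero frequency
    slope, _ = np.polyfit(np.log(f[mask]), np.log(p[mask]), 1)
    return -slope

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(4096))  # random walk: beta near 2
beta = spectral_exponent(walk)
D = (2 - beta) / 2                           # fractal dimension implied by beta = 2 - 2D
```

The fit over the full frequency range is crude; a real analysis would fit only the scaling region on either side of $k^{crossover}$.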

An Extended Scan Path Architecture Based on IEEE 1149.1 (IEEE 1149.1을 이용한 확장된 스캔 경로 구조)

  • Son, U-Jeong;Yun, Tae-Jin;An, Gwang-Seon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.7
    • /
    • pp.1924-1937
    • /
    • 1996
  • In this paper, we propose an ESP (Extended Scan Path) architecture for multi-board testing. The conventional architectures for board testing are the single scan path and the multi-scan path. In the single scan path architecture, the scan path for test data is just one chain; if the scan path is faulty due to a short or open, the test data are not valid. In the multi-scan path architecture, additional signals are required for multi-board testing. So the conventional architectures are not well suited to multi-board testing. In the ESP architecture, even if a scan path is shorted or open, it does not affect the remaining scan paths. As a result of executing parallel BIST and the IEEE 1149.1 boundary scan test using the proposed ESP architecture, we observed that the test time is short compared with the single scan path architecture. Because the ESP architecture uses a common bus, no additional signals are needed for multi-board testing. By comparing the ESP architecture with the conventional ones using the ISCAS '85 benchmark circuits, we showed that the architecture has improved results.

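The fault-isolation argument above can be sketched with a toy model: each scan chain is a shift register, and a short/open fault on one chain invalidates only that chain's data. The class and its behavior are illustrative assumptions, not the paper's ESP implementation.

```python
class ScanChain:
    """A boundary-scan chain modeled as a simple shift register of cells."""
    def __init__(self, n_cells, broken=False):
        self.cells = [0] * n_cells
        self.broken = broken        # open/short fault on this chain's path

    def shift_in(self, bits):
        """Shift a test vector through the chain; a faulty chain yields no valid data."""
        if self.broken:
            return None             # fault: captured data cannot be trusted
        out = []
        for b in bits:
            out.append(self.cells[-1])         # bit shifted out at TDO
            self.cells = [b] + self.cells[:-1] # bit shifted in at TDI
        return out

# Extended-scan-path idea: independent chains on a common bus, so a fault in
# one chain leaves the remaining chains usable (unlike one long single chain).
chains = [ScanChain(4), ScanChain(4, broken=True), ScanChain(4)]
results = [c.shift_in([1, 0, 1, 1]) for c in chains]
usable = [r is not None for r in results]      # [True, False, True]
```

In a single scan path, the three chains would be one register, and the middle fault would invalidate all test data.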

A Flow Control Scheme based on Queue Priority (큐의 우선순위에 근거한 흐름제어방식)

  • Lee, Gwang-Jun;Son, Ji-Yeon;Son, Chang-Won
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.1
    • /
    • pp.237-245
    • /
    • 1997
  • In this paper, a flow control mechanism is proposed which is based on priority control between the communication paths of a node. In this scheme, the demanding queue length for each path is pre-defined, and each node in that path is forced to maintain its buffer size under the limit by controlling the priority level of the path. A communication path which requires higher bandwidth sets its demanding queue length smaller. By relating the priority of a path to the length of its queue, a path requesting high bandwidth has a better chance to get it by defining a smaller demanding queue size. Also, by forcing a path with a high flow rate to maintain a small queue at each node along the path, the scheme keeps the transmission delay of the path small. The size of the demanding queue of a path is regularly adjusted to meet the application's requirements and the load status of the network during the lifetime of the communication. The priority control based on the demanding queue size is provided in the intermediate nodes as well as the end nodes. Because this flow control can react more quickly than end-to-end flow control, it provides a performance advantage, especially for high-speed networks.

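The inverse relation between demanding queue length and priority can be sketched as a node that always serves the path with the smallest demanding queue length first. The class, names, and the linear serving rule are illustrative assumptions; the paper does not give this exact formulation.

```python
class Node:
    """A node holding per-path queues; serves the path with the smallest
    demanding queue length first (smaller demand -> higher priority)."""
    def __init__(self):
        self.paths = {}  # path id -> (demanding queue length, queued packets)

    def enqueue(self, path, demand_len, pkt):
        q = self.paths.setdefault(path, (demand_len, []))
        q[1].append(pkt)

    def dequeue(self):
        # candidate paths that actually have packets waiting
        ready = [(p, d) for p, (d, q) in self.paths.items() if q]
        if not ready:
            return None
        p = min(ready, key=lambda t: t[1])[0]  # smallest demanding queue wins
        return self.paths[p][1].pop(0)

n = Node()
n.enqueue("bulk", demand_len=50, pkt="b1")
n.enqueue("interactive", demand_len=5, pkt="i1")
first = n.dequeue()  # "i1": the small-demand path is served first
```

Because the rule runs at every node, an intermediate node can throttle a path without waiting for end-to-end feedback, which is the quick-reaction property claimed above.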

Cybertrap : Unknown Attack Detection System based on Virtual Honeynet (Cybertrap : 가상 허니넷 기반 신종공격 탐지시스템)

  • Kang, Dae-Kwon;Hyun, Mu-Yong;Kim, Chun-Suk
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.8 no.6
    • /
    • pp.863-871
    • /
    • 2013
  • Recently, the application of open protocols and external network links to the national critical infrastructure has been growing with the development of information and communication technologies. This trend means that the national critical infrastructure is exposed to cyber attacks and can be seriously jeopardized when it is remotely operated or controlled by viruses, crackers, or cyber terrorists. In this paper, a virtual Honeynet model which can reduce the installation and operation resource problems of Honeynet systems is proposed. It maintains the merits of the Honeynet system and adopts virtualization technology. A virtual Honeynet model that can minimize operating cost is also proposed, with a data collection technique based on the verification of attack intention and a focus-oriented analysis technique. With the proposed model, a new type of attack detection system based on a virtual Honeynet, called Cybertrap, is designed and implemented with a host and data collection technique based on the verification of attack intention and a network attack pattern visualization technique. To test the proposed system, we establish a test-bed and evaluate the functionality and performance through a series of experiments.
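The data-collection idea behind a honeypot can be illustrated with a minimal low-interaction listener that accepts connections on an otherwise unused port and records who probed it. This is a generic sketch, not Cybertrap's actual design; the function name and behavior are assumptions.

```python
import socket
import threading

def run_honeypot(host="127.0.0.1", max_events=1):
    """Minimal low-interaction honeypot sketch: any connection to an unused
    port is suspicious by definition, so just log the source address."""
    srv = socket.socket()
    srv.bind((host, 0))            # ephemeral port; a real honeypot mimics a service
    srv.listen()
    port = srv.getsockname()[1]
    events = []

    def serve():
        for _ in range(max_events):
            conn, addr = srv.accept()
            events.append(addr[0])  # record the connecting (attacker) address
            conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return port, events, t

port, events, t = run_honeypot()
socket.create_connection(("127.0.0.1", port)).close()  # simulated probe
t.join(timeout=5)
```

A virtual Honeynet runs many such sensors as guests on one physical host, which is where the installation and operating-cost savings in the abstract come from.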

Query Processing of Uncertainty Position Using Road Networks for Moving Object Databases (이동체 데이타베이스에서 도로 네트워크를 이용한 불확실 위치데이타의 질의처리)

  • Ahn Sung-Woo;An Kyung-Hwan;Bae Tae-Wook;Hong Bong-Hee
    • Journal of KIISE:Databases
    • /
    • v.33 no.3
    • /
    • pp.283-298
    • /
    • 2006
  • The TPR-tree is a time-parameterized indexing scheme that supports querying the current and projected future positions of moving objects by representing the locations of the objects with their coordinates and velocity vectors. If this index is used in environments where the directions and velocities of moving objects, such as vehicles, change very often, however, it increases the communication cost between the server and the moving objects, because moving objects report their position to the server whenever the direction or the velocity exceeds a threshold value. To keep the communication cost constant, moving objects can instead report their position to the server periodically. However, the periodic position report also has a problem: the linear time functions of the TPR-tree do not guarantee the accuracy of the objects' positions if moving objects change their direction and velocity between position reports. To solve this problem, we propose a query processing scheme and a data structure that use road networks for predicting the uncertain positions of moving objects whose positions are reported to the server periodically. To reduce the uncertainty of the query region, the proposed scheme restricts the moving directions of an object to the directions of the road network's segments. To remove the uncertainty caused by changes in object velocity, it imposes the maximum speed of each road network segment. Experimental results show that the proposed scheme improves the accuracy of predicting the positions of moving objects compared with other schemes based on the TPR-tree.
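The two constraints above (movement only along the segment, speed bounded by the segment limit) shrink the uncertainty region from a 2-D disk to an interval along the road. A minimal sketch of that interval computation, with assumed parameter names and a single-segment simplification:

```python
def uncertainty_interval(last_pos, elapsed, seg_max_speed, seg_length):
    """Possible positions (in meters along one road segment) since the last
    periodic report. Movement is restricted to the segment direction and
    bounded by the segment's maximum speed; positions are clamped to the
    segment. Illustrative sketch, not the paper's exact data structure."""
    reach = seg_max_speed * elapsed          # farthest travel since the report
    lo = max(0.0, last_pos - reach)          # could have moved backward
    hi = min(seg_length, last_pos + reach)   # or forward, within the segment
    return lo, hi

# Object last reported 300 m along a 1 km segment, 10 s ago, limit 20 m/s:
lo, hi = uncertainty_interval(300.0, 10.0, 20.0, 1000.0)  # (100.0, 500.0)
```

A full implementation would propagate the interval across segment junctions; here a free-space model would instead yield a 200 m-radius disk covering far more candidate area.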

Evaluation of Various Slow-release Nitrogen Sources for Growth and Establishment of Poa pratensis on Sand-based Systems (모래지반에서 켄터키블루그래스의 성장과 조성에 미치는 질소의 유형별 효과)

  • Lee, Sang-Kook;Minner, David D.;Christians, Nick E.
    • Asian Journal of Turfgrass Science
    • /
    • v.24 no.2
    • /
    • pp.145-148
    • /
    • 2010
  • Nitrogen (N) is one of the most important of the 17 essential nutrients for maintaining turfgrass color and quality. Slow-release fertilizers were initially developed to provide a more consistent release of nitrogen over a longer period and are often used to decrease the leaching potential from sandy soils. The goal of this study is to determine whether various slow-release N sources affect the rate at which turfgrass establishes. Six nitrogen sources were evaluated: Nitroform (38-0-0), Nutralene (40-0-0), Organiform (30-0-0), sulfur-coated urea (SCU, 37-0-0), urea (46-0-0), and Milorganite (6-0-0). The root zone media were seeded and sodded with 'Limousine' Kentucky bluegrass (Poa pratensis L.). Sodded pots produced 182 to 518 g more clipping dry weight than seeded pots. Among seeded pots, Milorganite produced a greater amount of root dry weight than any other N source. Because the period of turfgrass growth differs between sodded and seeded plots, there were differences in clipping yield and root growth. Overall, the high N rate kept turf color above the acceptable rating of 6 among seeded pots throughout the study, whereas the low N rate did not produce acceptable turf color. Based on the results of this study, Milorganite would be recommended for new establishment of Kentucky bluegrass, as would urea, whose lower clipping yield can lead to reduced labor.

Technology Analysis on Automatic Detection and Defense of SW Vulnerabilities (SW 보안 취약점 자동 탐색 및 대응 기술 분석)

  • Oh, Sang-Hwan;Kim, Tae-Eun;Kim, HwanKuk
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.11
    • /
    • pp.94-103
    • /
    • 2017
  • As automatic hacking tools and techniques have improved, the number of new vulnerabilities has increased. The CVEs registered from 2010 to 2015 numbered about 80,000, and it is expected that more vulnerabilities will be reported. In most cases, patching a vulnerability depends on the developers' capability, and most patching techniques are based on manual analysis, which requires nine months on average. The process consists of finding the vulnerability, analyzing it based on the source code, and writing new code for the patch. Zero-day attacks are critical because, as mentioned, the time gap between the first discovery and taking action is too long. To solve the problem, techniques for automatically detecting and analyzing software (SW) vulnerabilities have been proposed recently. The Cyber Grand Challenge (CGC) held in 2016 was the first competition to create automatic defensive systems capable of reasoning over flaws in binaries and formulating patches without experts' direct analysis. Darktrace and Cylance are similar projects for managing SW automatically with artificial intelligence and machine learning. Though many foreign commercial institutions and academies run their own projects for automatic binary analysis, the domestic level of technology is much lower. This paper studies the automatic detection of SW vulnerabilities and defenses against them. We analyzed and compared related works and tools as additional elements, and optimal techniques for automatic analysis are suggested.
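The automated-discovery stage of such systems is typified by fuzzing: feed a target random inputs and record any that raise unexpected failures. A toy sketch, with a deliberately planted out-of-bounds bug as the "vulnerability" (all names and the bug are illustrative assumptions, not a real CGC component):

```python
import random

def parse_record(data: bytes):
    """Toy parser with a planted bug: it trusts the length byte at data[0]
    and indexes past the end of the buffer when that length is too large."""
    if not data:
        raise ValueError("empty record")   # documented, handled error
    n = data[0]
    return data[1 + n]                     # bug: no bounds check on n

def fuzz(target, trials=1000, seed=0):
    """Minimal random fuzzer: collect inputs whose failures are NOT part of
    the target's documented error contract (here, anything but ValueError)."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except ValueError:
            pass                           # expected, documented error
        except Exception as exc:           # unexpected: candidate vulnerability
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
```

Modern systems add coverage feedback, symbolic execution, and automatic patch synthesis on top of this loop; the abstract's nine-month manual turnaround is what those additions attack.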

A Non-Shared Metadata Management Scheme for Large Distributed File Systems (대용량 분산파일시스템을 위한 비공유 메타데이타 관리 기법)

  • Yun, Jong-Byeon;Park, Yang-Bun;Lee, Seok-Jae;Jang, Su-Min;Yoo, Jae-Soo;Kim, Hong-Yeon;Kim, Young-Kyun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.4
    • /
    • pp.259-273
    • /
    • 2009
  • Most large-scale distributed file systems decouple metadata operations from the read and write operations for a file. In these distributed file systems, a dedicated server called a metadata server (MDS) maintains the metadata of the file system, such as the access information for a file, the position of a file in the repository, the namespace of the file system, and so on. However, the existing systems use restrictive metadata management schemes, because most distributed file systems were designed to focus on the distributed management and input/output performance of data rather than of metadata. Therefore, in the existing systems, the metadata throughput and the scalability of the metadata server are limited. In this paper, we propose a new non-shared metadata management scheme that provides high metadata throughput and scalability for a cluster of MDSs. First, we derive a dictionary partitioning scheme as a new metadata distribution technique. Then, we present a load balancing technique based on this distribution technique. It is shown through various experiments that our scheme outperforms existing metadata management schemes in terms of scalability and load balancing.
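The non-shared property means every path maps to exactly one owning MDS, so servers never coordinate on a lookup. A minimal sketch using plain hash partitioning as the placement policy (the paper's dictionary partitioning scheme is more elaborate; the class and routing rule here are illustrative assumptions):

```python
import hashlib

class MDSCluster:
    """Non-shared metadata partitioning sketch: each path belongs to exactly
    one metadata server, so servers share no state and scale independently."""
    def __init__(self, n_servers):
        self.servers = [dict() for _ in range(n_servers)]  # per-MDS metadata store

    def _owner(self, path):
        # deterministic routing: every client computes the same owner
        h = int(hashlib.md5(path.encode()).hexdigest(), 16)
        return h % len(self.servers)

    def put(self, path, meta):
        self.servers[self._owner(path)][path] = meta

    def get(self, path):
        return self.servers[self._owner(path)].get(path)

cluster = MDSCluster(4)
cluster.put("/data/a.txt", {"size": 10})
meta = cluster.get("/data/a.txt")          # routed to the single owning MDS
loads = [len(s) for s in cluster.servers]  # exactly one server holds the entry
```

Load balancing in this setting means adjusting the partition map when one server's share grows hot, which is the second technique the abstract describes.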

A Study of Jazz Piano Techniques about Improvisation (재즈 피아노의 즉흥연주 기법 연구)

  • Sagong, Mi;Cho, Tae-Seon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.2
    • /
    • pp.583-589
    • /
    • 2017
  • In the 1900s, New Orleans, a harbor city, was in an era of confusion because there were various ethnic groups and races. Songs that had been sung by slaves taken from Africa, Black spiritual music, blues, British folk songs, French folk music, ballet music, Spanish dance music, and the marches of military bands were mixed with ragtime to achieve diversity. This developed into the beginning of jazz. While swing jazz was most popular and loved by the public during the 20th century, bebop preferred small-scale instrumental ensembles and developed as a form of jazz featuring impromptu musical performances. Later, cool jazz, a new style involving fast and complicated chord progressions, emerged, along with free jazz, which features a fundamental rupture from the jazz tradition. Miles Davis, who introduced the rock beat into jazz, started fusion jazz. Although jazz has been named differently depending on the era, the main attraction of jazz lies in improvisation. In other words, despite small changes in chord progression and rhythm, the most important thing the player considers is improvisation. Famous players who lived in the same era followed the overall atmosphere, but each had his or her own style, so even when they played the same song, they revealed their own style in the solo parts despite the same head.

Achievement of Color Constancy by Eigenvector (고유벡터에 의한 색 일관성의 달성)

  • Kim, Dal-Hyoun;Bak, Jong-Cheon;Jung, Seok-Ju;Kim, Kyung-Ah;Cha, Eun-Jong;Jun, Byoung-Min
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.10 no.5
    • /
    • pp.972-978
    • /
    • 2009
  • In order to achieve color constancy, this paper proposes a method that can detect the invariant direction that significantly affects the formation of an intrinsic image, using an eigenvector in the $\chi$-chromaticity space. First, the image is converted into data in the $\chi$-chromaticity space, which was suggested by Finlayson et al. Second, the method removes data points with low probabilities, such as noise, that may affect the invariant direction. Third, to detect the invariant direction, which is consistent with a principal direction, the eigenvector corresponding to the largest eigenvalue is calculated from the data extracted above. Finally, an intrinsic image is acquired by recovering the data with the detected invariant direction. The test images were taken from the image data presented by Barnard et al., and the detection performance for the invariant direction was compared with that of the entropy minimization method. The experimental results showed that our method detected a consistent invariant direction, since the proposed method had a lower standard deviation than the entropy method, and it was over three times faster than the compared method in terms of detection speed.
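The eigenvector step described above is ordinary principal-direction extraction: take the eigenvector of the covariance matrix with the largest eigenvalue. A minimal sketch on synthetic 2-D chromaticity samples (the $\chi$-chromaticity conversion and the noise-filtering step are omitted; the data here are made up for illustration):

```python
import numpy as np

def invariant_direction(chroma):
    """Principal direction of 2-D chromaticity samples: the eigenvector of
    the covariance matrix corresponding to the largest eigenvalue."""
    x = chroma - chroma.mean(axis=0)
    cov = np.cov(x.T)                     # 2x2 covariance of the samples
    vals, vecs = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
    return vecs[:, -1]                    # eigenvector of the largest eigenvalue

rng = np.random.default_rng(1)
# Synthetic samples spread mainly along the 45-degree line:
t = rng.standard_normal(500)
pts = np.stack([t, t + 0.1 * rng.standard_normal(500)], axis=1)
d = invariant_direction(pts)              # unit vector, close to (1,1)/sqrt(2)
```

Unlike entropy minimization, which searches candidate angles one by one, this closed-form eigendecomposition needs a single pass over the data, which is consistent with the speedup reported above.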