• Title/Summary/Keyword: Memory Information

Search results: 5,217

Analyses of Security Issues and Requirements Under Surroundings of Internet of Things (사물인터넷 환경하에서 보안 이슈 및 요구사항 분석)

  • Jung Tae Kim
    • The Journal of the Convergence on Culture Technology, v.9 no.4, pp.639-647, 2023
  • A variety of communication systems have been developed and advanced through the integration of wired and wireless connections with heterogeneous systems. Traditional technologies focused mainly on information technology based on computing techniques in the industry, manufacturing, and automation fields. As new technologies are developed and combined with traditional techniques, many new applications have emerged and merged with existing mechanisms and skills. Representative examples are IoT (Internet of Things) services and applications. IoT is a breakthrough technology and one of the innovative industries of the so-called Fourth Industrial Revolution. Due to the limited resources of IoT devices, such as small memory and low power and computing capacity, these devices are vulnerable and exposed to security problems. In this paper, we review and analyze security challenges, threats, and requirements in IoT services.

A study on Wikidata linkage methods for utilization of digital archive records of the National Debt Redemption Movement (국채보상운동 디지털 아카이브 기록물의 활용을 위한 위키데이터 연계 방안에 대한 연구)

  • Seulki Do;Heejin Park
    • Journal of Korean Society of Archives and Records Management, v.23 no.2, pp.95-115, 2023
  • This study designed a data model linked to Wikidata and examined its applicability, in order to increase the utilization of the digital archive records of the National Debt Redemption Movement, which are inscribed in the UNESCO Memory of the World register; implications were derived by analyzing the existing metadata, thesaurus, and semantic network graph. Through analysis of the original texts of the National Debt Redemption Movement records, key data-model classes for linking with Wikidata, such as record item, agent, time, place, and event, were derived. In addition, by identifying core properties for linking the classes and applying the designed data model to actual records, the possibility of acquiring abundant related information through navigation between classes centered on properties was confirmed. The results showed that Wikidata's strengths can be exploited to increase data usage in local archives where the scale and management of data are relatively limited, so the model can be considered for application in small-scale archives similar to the National Debt Redemption Movement digital archive.
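
For illustration, a minimal sketch of such a class-and-property model, expressed as Python dataclasses in the spirit of Wikidata items and statements, follows; all identifiers, labels, and property names here are hypothetical examples, not the paper's actual mapping.

```python
# Illustrative sketch only: the QIDs, labels, and properties are invented.
from dataclasses import dataclass

@dataclass
class Entity:
    qid: str           # Wikidata-style identifier (hypothetical here)
    label: str
    entity_class: str  # "record item", "agent", "time", "place", or "event"

@dataclass
class Statement:
    subject: Entity
    prop: str          # linking property, e.g. "participant" or "main subject"
    obj: Entity

# Example: a record item linked to the movement (event) and an agent.
movement = Entity("Q_hypothetical_0", "National Debt Redemption Movement", "event")
record = Entity("Q_hypothetical_1", "Donation ledger, 1907", "record item")
agent = Entity("Q_hypothetical_2", "Seo Sang-don", "agent")

graph = [
    Statement(record, "main subject", movement),
    Statement(movement, "participant", agent),
]

# Navigation between classes centered on properties, as the study describes.
for s in graph:
    print(f"{s.subject.label} --{s.prop}--> {s.obj.label}")
```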

Developing Cryptocurrency Trading Strategies with Time Series Forecasting Model (시계열 예측 모델을 활용한 암호화폐 투자 전략 개발)

  • Hyun-Sun Kim;Jae Joon Ahn
    • Journal of Korean Society of Industrial and Systems Engineering, v.46 no.4, pp.152-159, 2023
  • This study endeavors to enrich investment prospects in cryptocurrency by establishing a rationale for investment decisions. Its primary objective is to evaluate the predictability of four prominent cryptocurrencies (Bitcoin, Ethereum, Litecoin, and EOS) and to scrutinize the efficacy of trading strategies built on the resulting prediction models. To identify the most effective prediction model for each cryptocurrency and year, three methodologies representing traditional statistics and artificial intelligence were employed across diverse periods and time intervals: AutoRegressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), and Prophet. The results suggested that Prophet, trained on the previous 28 days' price history at 15-minute intervals, generally yielded the highest performance. The results were validated through a random selection of 100 days (20 target dates per year) spanning January 1st, 2018 to December 31st, 2022. Trading strategies were then formulated on the best-performing prediction model, grounded in the simple principle of assigning greater weight to more predictable assets: when the forecasting model indicates an upward trend, the cryptocurrency is acquired, with the investment amount determined by the model's performance. Experimental results consistently demonstrated that the proposed trading strategy yields higher returns than an equal-weight portfolio employing a buy-and-hold strategy. The cryptocurrency trading model introduced in this paper carries two significant implications. First, it facilitates the evolution of cryptocurrencies from speculative assets to investment instruments. Second, it advances deep-learning-based investment strategies by providing sound evidence for portfolio allocation, addressing the black-box issue, a notable weakness of deep learning, and offering increased transparency.
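
A minimal sketch of the best-performing setup described above, assuming the open-source `prophet` package and a hypothetical CSV of 15-minute Bitcoin prices (the file name and the `timestamp`/`close` column names are assumptions):

```python
import pandas as pd
from prophet import Prophet

# Load 15-minute closing prices; file and column names are placeholders.
prices = pd.read_csv("btc_15min.csv", parse_dates=["timestamp"])

# Prophet expects columns named 'ds' (datetime) and 'y' (value).
df = prices.rename(columns={"timestamp": "ds", "close": "y"})

# Train on the previous 28 days of 15-minute bars (28 * 96 = 2688 rows),
# the configuration the paper found to perform best.
train = df.tail(28 * 96)

model = Prophet(daily_seasonality=True)
model.fit(train)

# Forecast the next day (96 fifteen-minute steps).
future = model.make_future_dataframe(periods=96, freq="15min")
forecast = model.predict(future)

# A simple signal in the spirit of the paper's strategy: buy when the
# forecast at the horizon end exceeds the last observed price.
last_price = train["y"].iloc[-1]
predicted = forecast["yhat"].iloc[-1]
signal = "buy" if predicted > last_price else "hold"
print(f"last={last_price:.2f} predicted={predicted:.2f} -> {signal}")
```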

Metaverse Artifact Analysis through the Roblox Platform Forensics (메타버스 플랫폼 Roblox 포렌식을 통한 아티팩트 분석)

  • Yiseul Choi;Jeongeun Cho;Eunbeen Lee;Hakkyong Kim;Seongmin Kim
    • Convergence Security Journal, v.23 no.3, pp.37-47, 2023
  • The growth of the metaverse has been accelerated by the increased demand for non-face-to-face interaction during COVID-19 and by advances in technologies such as blockchain and NFTs. However, with the emergence of various metaverse platforms and the corresponding rise in users, criminal cases such as ransomware attacks, copyright infringements, and sexual offenses have occurred within the metaverse. Consequently, the need for artifacts that can serve as digital evidence within metaverse systems has grown, yet information about such artifacts remains scarce. Furthermore, metaverse security evaluation and forensic analysis are insufficient, and the absence of attack scenarios and related guidelines makes forensics challenging. To address these issues, this paper presents artifacts that can be used for user behavior analysis and timeline analysis, obtained through dynamic analysis of Roblox, a representative metaverse gaming platform. Based on an analysis of the interrelationships between the identified artifacts through memory forensics and log-file analysis, the paper suggests how these artifacts could be used in metaverse crime scenarios, and it proposes improvements to address institutional deficiencies by analyzing the current legal and regulatory landscape.
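
As one illustrative step of such a workflow, the sketch below scans a raw memory dump for keyword hits and prints their surrounding bytes; the dump path, keyword list, and output format are assumptions for illustration, not the artifacts the paper identifies.

```python
import re

KEYWORDS = [b"roblox", b"chat", b"PlayerScripts"]  # hypothetical markers

def scan_dump(path, context=32):
    """Yield (offset, surrounding bytes) for each keyword hit in the dump."""
    # Reading the whole dump is fine for a sketch; real dumps need chunking.
    with open(path, "rb") as f:
        data = f.read()
    for kw in KEYWORDS:
        for m in re.finditer(re.escape(kw), data, re.IGNORECASE):
            start = max(0, m.start() - context)
            yield m.start(), data[start:m.end() + context]

for offset, snippet in scan_dump("roblox_memory.dmp"):
    print(hex(offset), snippet)
```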

Estimation of Image-based Damage Location and Generation of Exterior Damage Map for Port Structures (영상 기반 항만시설물 손상 위치 추정 및 외관조사망도 작성)

  • Banghyeon Kim;Sangyoon So;Soojin Cho
    • Journal of the Korea institute for structural maintenance and inspection, v.27 no.5, pp.49-56, 2023
  • This study proposed a damage-location estimation method for automated image-based inspection of port infrastructure. Memory efficiency was improved by computing the homography matrix with feature detection and outlier removal, skipping the 3D modeling process and storing only the damage information. To specialize the algorithm for port infrastructure, it was optimized using ground-truth coordinate pairs created from images of port facilities. The location errors obtained by applying the method to a test sample and to a concrete wall were (X: 6.5 cm, Y: 1.3 cm) and (X: 12.7 cm, Y: 6.4 cm), respectively. In addition, by applying the algorithm to the concrete wall and presenting the result as an exterior damage map, the feasibility of field application was demonstrated.
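
A minimal sketch of the generic technique named here, feature matching with RANSAC outlier removal followed by homography estimation in OpenCV, follows; the image file names and the damage pixel coordinate are placeholders, and the paper's port-specific optimization is not reproduced.

```python
import cv2
import numpy as np

# Placeholder inputs: an inspection photo and a reference view of the facade.
img1 = cv2.imread("inspection_photo.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("reference_facade.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features (ORB chosen here for simplicity).
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches while estimating the homography.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Map a damage location (pixel coordinates) into the reference frame.
damage_px = np.float32([[[812.0, 410.0]]])  # hypothetical damage pixel
mapped = cv2.perspectiveTransform(damage_px, H)
print("damage location on reference image:", mapped.ravel())
```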

Real-Time Implementation of Active Classification Using Cumulative Processing (누적처리기법을 이용한 능동표적식별 시스템의 실시간 구현)

  • Park, Gyu-Tae;Bae, Eun-Hyon;Lee, Kyun-Kyung
    • The Journal of the Acoustical Society of Korea, v.26 no.2, pp.87-94, 2007
  • In an active sonar system, the aspect angle and length of a target can be estimated by computing the cross-correlation between the left and right split beams of an LFM (Linear Frequency Modulated) signal. However, high resolution in bearing and range is required to estimate this information for a remote target. Because this demands a sampling frequency considerably higher than the Nyquist rate, an over-sampling step based on interpolation is required. Real-time split-beam processing of such over-sampled beam outputs on a COTS (commercial off-the-shelf) DSP platform is limited, however, by the available throughput and memory capacity. This paper proposes a cumulative processing algorithm for split-beam processing that solves these problems. The performance of the proposed method was verified through simulation tests, and the method was implemented as a real-time system on an ADSP-TS101.
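
The sketch below illustrates the underlying split-beam idea, cross-correlating over-sampled left/right beam outputs to estimate the inter-beam delay at sub-sample resolution, on synthetic signals; it does not implement the paper's cumulative processing algorithm, and all rates and factors are assumptions.

```python
import numpy as np
from scipy.signal import correlate, resample

fs = 20_000     # original sampling rate (Hz), assumed
oversample = 8  # interpolation factor refining the delay estimate

t = np.arange(0, 0.05, 1 / fs)
chirp = np.sin(2 * np.pi * (2000 + 20000 * t) * t)  # simple LFM-like pulse

# Simulate left/right split-beam outputs: the right beam lags by 3 samples.
left = chirp
right = np.roll(chirp, 3)

# Over-sample both beams by interpolation, then cross-correlate.
left_os = resample(left, len(left) * oversample)
right_os = resample(right, len(right) * oversample)
xc = correlate(right_os, left_os, mode="full")

# The correlation peak gives the inter-beam delay at sub-sample resolution.
lag = (np.argmax(xc) - (len(left_os) - 1)) / (fs * oversample)
print(f"estimated inter-beam delay: {lag * 1e6:.1f} us")
```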

Performance Comparison of GMM and HMM Approaches for Bandwidth Extension of Speech Signals (음성신호의 대역폭 확장을 위한 GMM 방법 및 HMM 방법의 성능평가)

  • Song, Geun-Bae;Kim, Austin
    • The Journal of the Acoustical Society of Korea, v.27 no.3, pp.119-128, 2008
  • This paper analyzes the relationship between two representative statistical methods for bandwidth extension (BWE), the Gaussian Mixture Model (GMM) and Hidden Markov Model (HMM) approaches, and compares their performance. The HMM method is a memory-based system developed to exploit the inter-frame dependency of speech signals; it can therefore be expected to better estimate the frame-to-frame transitional information of the original spectra. To verify this, a dynamic measure, an approximation of the first-order time derivative of the spectral function, was introduced in addition to a static measure. The comparison shows that the two methods are similar on the static measure, while on the dynamic measure the HMM method clearly outperforms the GMM method, and the difference grows with the number of HMM states. This indicates that the HMM method is more appropriate, at least for the 'blind BWE' problem. Nevertheless, the GMM method can be treated as a preferable alternative to the HMM method in applications where static performance and algorithmic complexity are critical.
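
To make the "dynamic measure" concrete, the sketch below computes standard delta (regression) coefficients, a common approximation of the first-order time derivative of a spectral trajectory in speech processing; the input here is synthetic, and the paper's exact measure may differ.

```python
import numpy as np

def delta(features, window=2):
    """Standard delta-coefficient regression over +/- `window` frames."""
    T, D = features.shape
    denom = 2 * sum(k * k for k in range(1, window + 1))
    padded = np.pad(features, ((window, window), (0, 0)), mode="edge")
    out = np.zeros_like(features)
    for t in range(T):
        for k in range(1, window + 1):
            out[t] += k * (padded[t + window + k] - padded[t + window - k])
    return out / denom

# Example: 100 frames of a 20-dimensional spectral envelope (synthetic).
spec = np.random.randn(100, 20).cumsum(axis=0)  # smooth-ish trajectory
dyn = delta(spec)

# A dynamic distortion measure would then compare the delta trajectories of
# the original and the bandwidth-extended spectra, e.g. by mean-square error.
print("delta shape:", dyn.shape)
```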

A study on the Linkage of Volatility in Stock Markets under Global Financial Crisis (글로벌 금융위기하에서 주식시장 변동성의 연관성에 대한 연구)

  • Lee, Kyung-Hee;Kim, Kyung-Soo
    • Management & Information Systems Review, v.33 no.1, pp.139-155, 2014
  • This study examines the linkage of volatility between the Indian stock market and the stock markets of other countries under the integration of the world economy. The results were as follows. First, autocorrelation or serial correlation did not exist under the classic R/S model, but long-term memory was present under the modified R/S model. Second, no unit root was found in the unit-root tests for any period, and under normal conditions the series showed stable explanatory power and long-term memory in the ARFIMA model. Third, in the multivariate asymmetric BEKK and VAR models before the financial crisis, the conditional mean equation showed a strong influence of the own market in Taiwan and the UK, and strong one-directional spillover effects from Japan to India, from Taiwan to China (Korea, US), and from the US (Japan) to the UK. In the conditional variance equation, the GARCH terms showed strong spillover effects in the same direction as the ARCH coefficients of each market itself, and asymmetric effects existed in three home markets and between markets. Fourth, after the financial crisis, only the domestic market in Taiwan showed a strong influence in the conditional mean equation, and strong one-directional spillover effects existed from India to the US, from Taiwan to Japan, and from Korea to Germany. In the conditional variance equation, the strong spillover effects were the same as before the crisis; an asymmetric effect was present in the UK domestic market, and a one-way asymmetric effect existed from Taiwan to Germany. The study thus documents the linkage between the volatility of the Indian stock market and those of other countries, confirming asymmetric reactions and return (volatility) spillover effects between them.
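
As background for the long-memory tests mentioned above, the sketch below computes the classical rescaled-range (R/S) statistic and a rough Hurst-exponent estimate on synthetic returns; the modified R/S test the paper refers to adjusts for short-run dependence and is not reproduced here.

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic of a return series x."""
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    z = np.cumsum(dev)       # cumulative deviations from the mean
    r = z.max() - z.min()    # range of the cumulative deviations
    s = x.std(ddof=0)        # sample standard deviation
    return r / s

# Under short memory, log(R/S) grows roughly like 0.5 * log(n); a Hurst
# exponent estimate above 0.5 suggests long memory in the series.
rng = np.random.default_rng(0)
returns = rng.normal(size=1000)
n = len(returns)
hurst = np.log(rescaled_range(returns)) / np.log(n)
print(f"R/S = {rescaled_range(returns):.2f}, rough Hurst estimate = {hurst:.2f}")
```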

The study of stereoscopic editing process with applying depth information (깊이정보를 활용한 입체 편집 프로세스 연구)

  • Baek, Kwang-Ho;Kim, Min-Seo;Han, Myung-Hee
    • Journal of Digital Contents Society, v.13 no.2, pp.225-233, 2012
  • 3D stereoscopic image contents have emerged as the blue chip of the next-generation contents market. However, the 3D contents created commercially in Korea have failed at the box office, because the quality of Korean 3D contents is much lower than that of overseas contents and the current 3D post-production process is still based on 2D workflows; the 3D editing process is therefore directly connected to content quality. In current production practice, editing is performed on a 2D-based system, then checked on a 3D display system and corrected when problems are found. To improve this, this study proposes making the 3D editing process more objective by visualizing the depth data already used in compositing work, such as disparity maps and depth maps, within the editing process. The proposed process was applied to a music drama production and compared with the workflow of a film production: depth values could be verified across cuts even where they had changed considerably, while the overall depth values remained consistent. Since the current process relies on an artist's subjective sense of depth, results can vary with the artist's condition; moreover, the positive parallax range cannot be predicted, so the stereoscopic effect of a space may be distorted when cuts in the same or a confined space show different depth values. By contrast, objective 3D editing based on visualized depth data can keep the stereoscopic effect consistent within the same space and across the whole content, enriching the 3D contents and helping to mitigate problems such as depth distortion and visual fatigue.
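
As an example of the kind of depth visualization proposed above, the sketch below computes a disparity map from a rectified stereo pair with OpenCV block matching; the input file names are placeholders, and a production pipeline would use the compositing-grade disparity and depth maps the paper refers to.

```python
import cv2

# Placeholder inputs: a rectified left/right pair, 8-bit grayscale.
left = cv2.imread("left_eye.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_eye.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Normalize for display; brighter pixels sit closer to the viewer.
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity_map.png", disp_vis.astype("uint8"))
```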

The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui;Baek, Sung Ha;Lee, Soon Jo;Bae, Hae Young
    • Spatial Information Research, v.20 no.5, pp.99-109, 2012
  • Recently, with the growth of social media and the spread of smartphones, the amount of data has increased considerably through heavy use of SNS (Social Network Services). Accordingly, the big data concept has emerged, and many researchers are seeking ways to make the best use of big data. To maximize the creative value of the big data held by companies, it must be combined with existing data; however, the physical and logical storage structures of the data sources differ so much that a system which can integrate and manage them is needed. MapReduce was developed to process big data quickly through distributed processing, but it is difficult to build and store indexes for all keywords, the store-then-search cycle makes real-time processing difficult, and processing complex events over heterogeneous data incurs extra cost. A complex event processing (CEP) system addresses this: it receives data from different sources and combines them, enabling complex event processing that suits real-time workloads, especially on stream data. Nevertheless, unstructured data based on the text of SNS posts and internet articles is managed as plain text, so strings must be compared on every query, which results in poor performance. Therefore, this paper extends a CEP system to manage unstructured data and process queries quickly. The data-complex function is extended to give a logical schema to strings by converting string keywords into integers through filtering against a keyword set. In addition, by processing stream data in memory in real time within the CEP system, the delay of reading query results back from disk is reduced.
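
A minimal sketch of the string-to-integer keyword encoding described above follows; the keyword set, events, and function names are illustrative, not the paper's actual schema.

```python
# Map string keywords to integer IDs once, so stream filtering compares
# integers instead of strings on the hot path.
KEYWORD_IDS = {kw: i for i, kw in enumerate(["flood", "fire", "earthquake"])}

def encode(tokens):
    """Keep only registered keywords, replacing each with its integer ID."""
    return [KEYWORD_IDS[t] for t in tokens if t in KEYWORD_IDS]

def matches(event_ids, query_ids):
    """Integer-only comparison during in-memory stream processing."""
    return bool(set(event_ids) & set(query_ids))

# Encode the query once, then test each incoming event against it.
query = encode(["fire", "downtown"])
stream = [["big", "fire", "reported"], ["sunny", "day"]]
for tokens in stream:
    print(tokens, "->", matches(encode(tokens), query))
```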