• Title/Summary/Keyword: Memory Resources


Making a Civil War Surrounding History in Cyber Space Focused on 5·18 Discourses in Ilbe Storehouse (사이버 공간에서의 역사의 내전(內戰)화 '일간베스트저장소'의 5·18 언설을 중심으로)

  • Jung, Soo-Young;Lee, Youngjoo
    • Korean journal of communication and information / v.71 / pp.116-154 / 2015
  • Although officially given the historical signifier "Gwangju Democratic Movement" in 1987, 5·18 has been restated by far-right and conservative groups as a rebellion and riot committed by subversive elements who obeyed North Korea's commands or were connected with North Korea. As those once held responsible for the "rebellion and riot" were later compensated, the far-right conservatives' collective narrative that the country had come to be dominated by the pro-North Korean left aroused extreme hostility toward 5·18. Far-right conservatives across many fields, including political parties, universities, the press and media, and civic groups, carry out incendiary discourse politics intended to recast the history and memory of 5·18 in their own terms. Online sites such as Ilbe Storehouse, whose users are often described as the "young right wing," are a main route for spreading far-right remarks on 5·18. Ilbe is a main channel through which far-right conservative remarks and information on 5·18 are reconstituted and reproduced, and one of the main arenas where disparagement, ridicule, hostility, and hatred toward 5·18 unfold. This study collects 5·18-related remarks and stories posted on Ilbe and examines how they assign meaning to 5·18 and how the information resources those remarks depend on are connected to each other. In doing so, the study seeks to draw out the implications of the incendiary politics that remarks on 5·18 carry, both on Ilbe and among far-right conservatives.


A Study of Electromagnetic Coupling Analysis between Dipole Antenna and Transmission Line Using PEEC Method (PEEC 방법을 이용한 다이폴 안테나와 전송선로 사이의 전자기 결합 분석에 관한 연구)

  • Oh, Jeongjoon;Kim, Kwangho;Park, Myeongkoo;Lee, Hosang;Nah, Wansoo
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.28 no.11 / pp.902-915 / 2017
  • In recent years, mobile devices have become increasingly multi-functional and high-performance, resulting in a dramatic increase in processing speed. At the same time, as device sizes shrink, the circuits inside a device are more easily exposed to electromagnetic interference radiated from antennas or adjacent circuits, degrading system performance. To prevent this, the device must be designed with its electromagnetic characteristics in mind, using EM simulation at the product design stage. However, full-wave EM simulation takes a long time and requires high-end system resources for fast analysis. In this paper, an equivalent-circuit modeling method for a round wire is proposed using the PEEC method, and the electromagnetic coupling from a dipole antenna to a transmission line is analyzed in the frequency domain and compared with the results of an electromagnetic simulator. The PEEC method shows good agreement with the electromagnetic simulation in a much shorter time.
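The PEEC (Partial Element Equivalent Circuit) method builds an equivalent circuit from the partial inductances and capacitances of conductor segments. As a minimal, generic illustration (not the paper's round-wire model), the classic low-frequency approximation for the partial self-inductance of a straight round wire can be sketched in Python; the 3/4 term assumes a uniform current distribution and length much greater than radius:

```python
from math import log, pi

MU0 = 4e-7 * pi  # vacuum permeability, H/m

def partial_self_inductance(length_m, radius_m):
    """Low-frequency partial self-inductance of a straight round wire
    (classic approximation, valid for length >> radius):
        Lp = (mu0 * l / (2*pi)) * (ln(2*l/r) - 3/4)
    """
    return MU0 * length_m / (2 * pi) * (log(2 * length_m / radius_m) - 0.75)

# Example: a 10 mm wire segment of 0.5 mm radius
lp = partial_self_inductance(10e-3, 0.5e-3)
print(f"Lp = {lp * 1e9:.2f} nH")  # a few nanohenries, as expected for this size
```

In a full PEEC model, one such partial inductance per segment, together with mutual terms between segments, populates the circuit that is then solved in the frequency domain.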

Linear Resource Sharing Method for Query Optimization of Sliding Window Aggregates in Multiple Continuous Queries (다중 연속질의에서 슬라이딩 윈도우 집계질의 최적화를 위한 선형 자원공유 기법)

  • Baek, Seong-Ha;You, Byeong-Seob;Cho, Sook-Kyoung;Bae, Hae-Young
    • Journal of KIISE: Databases / v.33 no.6 / pp.563-577 / 2006
  • A stream processor uses resource sharing to make efficient use of limited resources across multiple continuous queries. Previous methods process aggregate queries over a level (hierarchical) structure, so each insert operation incurs the cost of reconstructing that structure, and each search operation incurs the cost of looking up aggregate information for every sliding-window size. This paper therefore uses a linear structure to optimize sliding-window aggregation. The method proceeds through three phases in sequence: pane-size decision, pane generation, and pane deletion. The decision phase determines the optimal pane size for holding accurate aggregate information. The generation phase stores per-pane aggregate information for data taken from the stream buffer. The deletion phase removes panes that are no longer used. Because it uses a linear data layout, the proposed method consumes fewer resources than level-structure-based methods. The insertion cost of aggregate information is reduced because only one pane's worth of data is aggregated at a time no matter how much stream data arrives, and the search cost is reduced because a linear scan suffices even when sliding-window sizes differ across queries. Experiments show that the proposed method uses less memory and increases query processing speed.
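The pane-based scheme described above can be sketched as follows; the pane size, window size, and use of SUM as the aggregate are illustrative assumptions rather than the paper's exact design:

```python
from collections import deque

class PaneAggregator:
    """Sketch of pane-based sliding-window aggregation (SUM), assuming a
    window that is an integer multiple of the pane size. Incoming tuples
    are aggregated per pane, and a window result combines whole panes,
    so each tuple is touched once no matter how many queries share it."""

    def __init__(self, pane_size, window_panes):
        self.pane_size = pane_size                # tuples per pane
        self.panes = deque(maxlen=window_panes)   # deletion phase: old panes drop off
        self.current_sum = 0
        self.current_count = 0

    def insert(self, value):
        # generation phase: accumulate into the currently open pane
        self.current_sum += value
        self.current_count += 1
        if self.current_count == self.pane_size:
            self.panes.append(self.current_sum)   # close the pane
            self.current_sum = 0
            self.current_count = 0

    def window_sum(self):
        # search phase: linear combination of closed pane aggregates
        return sum(self.panes)

agg = PaneAggregator(pane_size=3, window_panes=2)  # window of 6 tuples
for v in [1, 2, 3, 4, 5, 6, 7, 8, 9]:
    agg.insert(v)
print(agg.window_sum())  # last two closed panes: 4+5+6+7+8+9 = 39
```

A second query with a larger window could share the same pane stream and simply combine more panes, which is the resource-sharing benefit the paper exploits.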

Markov Chain Properties of Sea Surface Temperature Anomalies at the Southeastern Coast of Korea (한국 남동연안 이상수온의 마르코프 연쇄 성질)

  • Kang, Yong-Q.;Gong, Yeong
    • Journal of the Oceanological Society of Korea (한국해양학회지) / v.22 no.2 / pp.57-62 / 1987
  • The Markov chain properties of sea surface temperature (SST) anomalies, namely the dependency of a monthly SST anomaly on that of the previous month, are studied based on 28 years (1957-1984) of SST data at 5 stations on the southeastern coast of Korea. We classified the monthly SST anomalies at each station into low, normal, and high states, using the standard deviation of the SST anomalies at each station as the reference for the classification, and computed transition probabilities between the states of two successive months. The transition probability of the normal state remaining in the same state is about 0.8, while that of the high or low state remaining in the same state is about one half. The SST anomalies have almost no probability of transiting directly from the high (low) state to the low (high) state. Statistical tests show that the Markov chain properties of SST anomalies are stationary in time and homogeneous in space. A multi-step Markov chain analysis shows that the 'memory' of the SST anomalies at the coastal stations persists for about 3 months.

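The state classification and transition-probability estimation described above can be sketched as follows; the anomaly values and the one-standard-deviation threshold are toy data, not the paper's 28-year record:

```python
def classify(anomalies, sd):
    """Classify monthly anomalies into low (0), normal (1), high (2),
    using the station's standard deviation sd as the threshold."""
    return [0 if a < -sd else 2 if a > sd else 1 for a in anomalies]

def transition_matrix(states, n=3):
    """Estimate first-order Markov transition probabilities from a
    sequence of states observed in successive months."""
    counts = [[0] * n for _ in range(n)]
    for s, t in zip(states, states[1:]):
        counts[s][t] += 1
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

# Toy series of monthly anomalies (deg C) with sd = 1.0
states = classify([0.1, 1.5, 1.2, -0.2, -1.8, -1.1, 0.3], sd=1.0)
P = transition_matrix(states)
for row in P:
    print(row)  # each row sums to 1 over the three destination states
```

Raising `P` to the k-th power would give the k-step transition probabilities used in the paper's multi-step analysis of anomaly "memory".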

Super Resolution Algorithm Based on Edge Map Interpolation and Improved Fast Back Projection Method in Mobile Devices (모바일 환경을 위해 에지맵 보간과 개선된 고속 Back Projection 기법을 이용한 Super Resolution 알고리즘)

  • Lee, Doo-Hee;Park, Dae-Hyun;Kim, Yoon
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.103-108 / 2012
  • Recently, as high-performance mobile devices and multimedia content have become widespread, super-resolution (SR) techniques that reconstruct low-resolution images into high-resolution images are growing in importance. Because mobile devices have limited resources, SR algorithms for them must be designed with computation and memory in mind. In this paper, we propose a new single-frame fast SR technique suitable for mobile devices. To prevent color distortion, the image is converted from the RGB color domain to the HSV color domain, and only the brightness channel V (Value) is processed, reflecting the characteristics of human visual perception. First, the low-resolution image is enlarged by an improved fast back projection that accounts for noise elimination. At the same time, a reliable edge map is extracted using LoG (Laplacian of Gaussian) filtering. Finally, the high-resolution image is reconstructed using the edge information together with the improved back projection result. The proposed technique effectively removes the unnatural artifacts generated during super-resolution restoration, and edge information that would otherwise be lost is recovered and emphasized. Experimental results indicate that the proposed algorithm outperforms conventional back projection and interpolation methods.
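The basic iterative back projection loop that the paper improves upon can be sketched in 1D; the noise-aware refinement and LoG edge map of the proposed method are not reproduced here, and the averaging/nearest-neighbour observation model is an illustrative assumption:

```python
def downsample(x, factor):
    """Average pooling: a simple observation model (blur + decimate)."""
    return [sum(x[i:i + factor]) / factor for i in range(0, len(x), factor)]

def upsample(x, factor):
    """Nearest-neighbour expansion (stand-in for interpolation)."""
    return [v for v in x for _ in range(factor)]

def back_project(low_res, factor, iterations=10, step=1.0):
    """Iterative back projection: start from an interpolated estimate,
    then repeatedly project the reconstruction error of the simulated
    low-res signal back onto the high-res estimate."""
    high = upsample(low_res, factor)            # initial estimate
    for _ in range(iterations):
        simulated = downsample(high, factor)    # apply observation model
        error = [l - s for l, s in zip(low_res, simulated)]
        correction = upsample(error, factor)    # back-project the error
        high = [h + step * c for h, c in zip(high, correction)]
    return high

lr = [1.0, 3.0, 2.0]
hr = back_project(lr, factor=2)
print(downsample(hr, 2))  # re-simulated low-res matches the input
```

The paper's contribution, per the abstract, is to speed up this loop and to weight the final reconstruction with an LoG edge map so that edges lost during enlargement are restored.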

Removal of residual VOCs in a collection chamber using decompression for analysis of large volatile sample

  • Lee, In-Ho;Byun, Chang Kyu;Eum, Chul Hun;Kim, Taewook;Lee, Sam-Keun
    • Analytical Science and Technology / v.34 no.1 / pp.23-35 / 2021
  • To measure the volatile organic compounds (VOCs) of samples too large for commercially available chambers, a stainless steel vacuum chamber (VC) with an internal diameter of 205 mm and a height of 50 mm was manufactured, and its temperature was controlled with an oven. After the sample's volatiles were concentrated in the chamber under helium gas, residual volatile substances in the chamber could be removed under reduced pressure ((2 ± 1) × 10^-2 mmHg). The chamber was connected through a 6-port valve to a purge-and-trap (P&T) unit to concentrate the VOCs, which were analyzed by gas chromatography-mass spectrometry (GC-MS) after thermal desorption (VC-P&T-GC-MS). Using toluene, the recovery rate of this device was 85 ± 2 %, the reproducibility 5 ± 2 %, and the detection limit 0.01 ng L^-1. Removing VOCs remaining in the chamber with helium was compared against removing them under reduced pressure, using Korean Drinking Water Regulation (KDWR) VOC Mix A (5 μL of 100 μg mL^-1) and butylated hydroxytoluene (BHT, 2 μL of 500 μg mL^-1). Whereas flushing with helium requires a large amount of gas and time, applying reduced pressure ((2 ± 1) × 10^-2 mmHg) only during the GC-MS run time removed the VOCs and BHT to less than 0.1 % of the originally injected concentration. Volatile substances from six types of cell phone case were analyzed with VC-P&T-GC-MS; BHT was detected in four types and quantified. Maintaining the chamber at reduced pressure during the GC-MS analysis eliminated the memory effect and did not affect the next sample analysis. The volatile substances in a cell phone case were also analyzed by dynamic headspace (HT3) with GC-MS, and the results were compared with those of VC-P&T-GC-MS. Considering chamber volume and sample weight, the VC-P&T configuration collected volatile substances more efficiently than the HT3. The VC-P&T-GC-MS system is expected to be useful for VOC measurement of inhomogeneous large samples or of devices used inside clean rooms.

Security-Enhanced Local Process Execution Scheme in Cloud Computing Environments (클라우드 컴퓨팅 환경에서 보안성 향상을 위한 로컬 프로세스 실행 기술)

  • Kim, Tae-Hyoung;Kim, In-Hyuk;Kim, Jung-Han;Min, Chang-Woo;Kim, Jee-Hong;Eom, Young-Ik
    • Journal of the Korea Institute of Information Security & Cryptology / v.20 no.5 / pp.69-79 / 2010
  • In current cloud environments, applications are executed on a remote cloud server and utilize its computing resources, such as physical memory and CPU. If the remote server is exposed to a security threat, every application on it can fall victim to attack. In particular, despite the many advantages of cloud services, both individuals and businesses often hesitate to adopt them out of concern over a malicious administrator of the cloud server. We propose a security-enhanced local process execution scheme that resolves this vulnerability of current cloud computing environments. Since secret data is stored locally, it can be protected from security threats to the cloud server, and by utilizing the computing resources of the local computer instead of the remote server, high-security processes are freed from the remote server's vulnerabilities.

A Security SoC embedded with ECDSA Hardware Accelerator (ECDSA 하드웨어 가속기가 내장된 보안 SoC)

  • Jeong, Young-Su;Kim, Min-Ju;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.7 / pp.1071-1077 / 2022
  • A security SoC that can be used to implement elliptic curve cryptography (ECC) based public-key infrastructures was designed. In the security SoC, a hardware accelerator for the elliptic curve digital signature algorithm (ECDSA) is interfaced with a Cortex-A53 CPU over the AXI4-Lite bus. The ECDSA hardware accelerator, which consists of a high-performance ECC processor, a SHA3 hash core, a true random number generator (TRNG), a modular multiplier, BRAM, and a control FSM, was designed to perform high-performance ECDSA signature generation and verification with minimal CPU control. The security SoC was implemented on a Zynq UltraScale+ MPSoC device for hardware-software co-verification, and evaluation showed that ECDSA signature generation or verification can be performed about 1,000 times per second at a clock frequency of 150 MHz. The ECDSA hardware accelerator occupies 74,630 LUTs, 23,356 flip-flops, 32 kb of BRAM, and 36 DSP blocks.
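A back-of-envelope check of the reported throughput: at 150 MHz and roughly 1,000 signature operations per second, each ECDSA operation consumes on the order of 150,000 clock cycles:

```python
clock_hz = 150_000_000   # reported accelerator clock frequency
ops_per_sec = 1_000      # reported ECDSA sign/verify throughput

# cycles available per signature generation or verification
cycles_per_op = clock_hz // ops_per_sec
print(cycles_per_op)  # 150000
```

This is only an order-of-magnitude figure derived from the abstract's two reported numbers, not a cycle count stated in the paper.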

Data collection strategy for building rainfall-runoff LSTM model predicting daily runoff (강수-일유출량 추정 LSTM 모형의 구축을 위한 자료 수집 방안)

  • Kim, Dongkyun;Kang, Seokkoo
    • Journal of Korea Water Resources Association / v.54 no.10 / pp.795-805 / 2021
  • In this study, after developing an LSTM-based deep learning model for estimating daily runoff in the Soyang River Dam basin, the accuracy of the model was investigated for various combinations of model structure and input data. The model was built on a database consisting of daily average precipitation, daily average temperature, and daily average wind speed (inputs) and daily average flow rate (output) over the first 12 years (1997.1.1-2008.12.31). The Nash-Sutcliffe model efficiency coefficient (NSE) and RMSE were examined for validation using the flow discharge data of the following 12 years (2009.1.1-2020.12.31). The most accurate combination used all available input data (12 years of daily precipitation, temperature, and wind speed) on an LSTM structure with 64 hidden units; the NSE and RMSE of the verification period were 0.862 and 76.8 m^3/s, respectively. When the number of LSTM hidden units exceeds 500, performance degradation due to overfitting begins to appear, and beyond 1,000 hidden units the overfitting problem becomes prominent. A model with very high performance (NSE = 0.80-0.84) could be obtained using only 12 years of daily precipitation for training, and a model with reasonably high performance (NSE = 0.63-0.85) could be obtained using only one year of input data. In particular, an accurate model (NSE = 0.85) could be obtained if that one year of training data contained a wide range of flow events, such as extreme flows and droughts, as well as normal events. When the training data included both normal and extreme flows, input records longer than 5 years did not significantly improve model performance.
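The two validation metrics used above can be computed as follows; the observed and simulated discharge values are toy data, not the Soyang River record:

```python
from math import sqrt

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of the model's squared
    error to the variance of the observations around their mean. NSE = 1
    is a perfect fit; NSE <= 0 means the model is no better than the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    svar = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / svar

def rmse(observed, simulated):
    """Root mean square error, in the units of the data (e.g. m^3/s)."""
    n = len(observed)
    return sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)

# Toy daily discharge series (m^3/s)
obs = [10.0, 50.0, 200.0, 80.0, 20.0]
sim = [12.0, 45.0, 190.0, 85.0, 25.0]
print(round(nse(obs, sim), 3), round(rmse(obs, sim), 2))
```

Because NSE normalizes by the variance of the observations, it rewards models that track large flow events, which is consistent with the paper's finding that a single year containing extreme flows can train a surprisingly accurate model.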

Development of 1ST-Model for 1 hour-heavy rain damage scale prediction based on AI models (1시간 호우피해 규모 예측을 위한 AI 기반의 1ST-모형 개발)

  • Lee, Joonhak;Lee, Haneul;Kang, Narae;Hwang, Seokhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association / v.56 no.5 / pp.311-323 / 2023
  • To reduce disaster damage from localized heavy rain, floods, and urban inundation, it is important to know in advance whether a natural disaster will occur. In Korea, heavy rain watches and heavy rain warnings are currently issued according to the criteria of the Korea Meteorological Administration. However, since a single criterion is applied to the whole country, heavy rain damage for a specific region cannot be clearly anticipated in advance. Therefore, in this paper, we tried to reset the current criteria for special weather reports to reflect regional characteristics and to predict the damage caused by rainfall one hour ahead. Gyeonggi Province, which suffers heavy rain damage more frequently than other regions, was selected as the study area. The hazard-triggering rainfall, i.e., the rainfall that induces disaster, was then set using hourly rainfall and heavy rain damage data, considering local characteristics. A heavy rain damage prediction model was developed from the hazard-triggering rainfall and rainfall data using two machine learning techniques, a decision tree model and a random forest model. In addition, long short-term memory (LSTM) and deep neural network models were used to predict rainfall one hour ahead. The rainfall predicted by the developed prediction model is fed into the trained classification model to predict whether heavy rain damage will occur one hour later; we call this the 1ST-Model. The 1ST-Model can be used to prevent and prepare for heavy rain disasters and is judged to contribute greatly to reducing the damage caused by heavy rain.
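The two-stage pipeline can be sketched as follows; the persistence forecast and the single 40 mm threshold are hypothetical stand-ins for the paper's LSTM/DNN rainfall predictor and its tree-based damage classifiers:

```python
def predict_next_hour_rainfall(recent_rainfall_mm):
    """Stand-in for the paper's LSTM/DNN rainfall predictor.
    Hypothetical: a naive persistence forecast (next hour = last hour)."""
    return recent_rainfall_mm[-1]

def damage_expected(rainfall_mm, threshold_mm=40.0):
    """Stand-in for the trained decision tree / random forest classifier.
    Hypothetical: a single regional hazard-triggering rainfall threshold."""
    return rainfall_mm >= threshold_mm

def one_step_model(recent_rainfall_mm):
    """1ST-Model pipeline sketch: forecast rainfall one hour ahead, then
    classify whether heavy rain damage is expected for that forecast."""
    forecast = predict_next_hour_rainfall(recent_rainfall_mm)
    return forecast, damage_expected(forecast)

# Three hours of observed rainfall (mm), rising toward a heavy-rain event
print(one_step_model([5.0, 20.0, 55.0]))  # (55.0, True)
```

The value of the actual 1ST-Model lies in replacing both stand-ins with trained models and in setting the trigger threshold per region from historical damage data, rather than using one nationwide criterion.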