• Title/Summary/Keyword: data memory


A Proposal for Archives Securing Community Memory: The Achievements and Limitations of GPH Archives (공동체의 기억을 담는 아카이브를 지향하며 20세기민중생활사연구단 아카이브의 성과와 과제)

  • Kim, Joo-Kwan
    • The Korean Journal of Archival Studies, no.33, pp.85-112, 2012
  • The Group for the People without History (GPH) was launched in September 2002 and worked for around five years with the following purposes: first, GPH collects first-hand data on people's everyday lives through fieldwork; second, it constructs digital archives of the collected data; third, it guarantees public access to those archives; and last, it encourages users to utilize the archived data at various levels. GPH has influenced the construction of archives on everyday life history as well as research areas such as anthropology and social history. What is important is that GPH tried to construct digital archives even before awareness of archives had spread widely in Korea beyond the formal sector. Furthermore, the GPH archives proposed a model of open archives that encouraged people's participation in, and utilization of, the archives. GPH also showed how archived data could be used: on the basis of the archived data it published forty-seven books of people's life histories and five photographic books, and held six photographic exhibitions. Though the GPH archives, as leading civilian archives, helped ignite discussions on archives in various areas, they have a few limitations. The most important problem is that the data are vanishing too fast for researchers to collect; it is impossible for researchers to collect all of them. Second, the physical space and hardware for data storage must be secured. One alternative that addresses the problems revealed in GPH's work is to construct community archives: decentralized archives run by people themselves to preserve their own voices and history. This would help democratize archives.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business, so a separate system is needed to gather, store, categorize, and analyze the log data generated while processing that business. However, flexible storage expansion for massive amounts of unstructured log data, and the many functions needed to categorize and analyze them, are difficult to realize in existing computing environments. Thus, in this study, we use cloud computing technology to build a cloud-based log processing system for unstructured log data that are difficult to handle with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment and can flexibly expand computing resources such as storage space and memory when storage is extended or log data increase rapidly. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system can automatically restore itself and continue operating after a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have complex schemas that are inappropriate for unstructured log data, and their strict schemas prevent node expansion when stored data must be distributed across nodes as the data volume grows rapidly. NoSQL databases do not provide the complex computations that relational databases offer, but they can easily expand through node dispersion when data increase rapidly; they are non-relational databases with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses the representative document-oriented model, MongoDB, which has a schema-free structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when data grow rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over a bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analyses of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and log type, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log insert and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is identified through MongoDB insert-performance tests over various chunk sizes.
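A minimal sketch of the collector-to-MongoDB path described above (not code from the paper; the database and collection names, document fields, and routing logic are assumptions for illustration):

```python
# Hypothetical sketch: routing bank log records into MongoDB,
# relying on MongoDB's auto-sharding for storage expansion.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # mongos router in a sharded setup
db = client["bank_logs"]                           # database name is an assumption

def collect(record: dict) -> None:
    """Classify a raw log record by type and store it (log collector module)."""
    doc = {
        "type": record.get("type", "unknown"),     # e.g. "transaction", "auth"
        "ts": datetime.now(timezone.utc),
        "payload": record,                          # unstructured body, schema-free
    }
    db.raw_logs.insert_one(doc)

collect({"type": "transaction", "branch": "A01", "amount": 125000})
```

Sharding itself is enabled server-side (e.g. via `sh.shardCollection(...)` in the mongo shell); the sketch only shows why a schema-free document store suits logs whose fields vary by type.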

Design of Low-Noise and High-Reliability Differential Paired eFuse OTP Memory (저잡음 · 고신뢰성 Differential Paired eFuse OTP 메모리 설계)

  • Kim, Min-Sung; Jin, Liyan; Hao, Wenchao; Ha, Pan-Bong; Kim, Young-Hee
    • Journal of the Korea Institute of Information and Communication Engineering, v.17 no.10, pp.2359-2368, 2013
  • In this paper, we propose an IRD (internal read data) circuit that prevents re-entry into the read mode, while keeping the read-out DOUT datum latched from power-up, even if noise such as a glitch occurs at a signal port such as the read input RD while the power IC is on. A pulsed WL (word line) driving method is also used to keep a DC current of several tens of microamperes from flowing through the read transistor of a differential paired eFuse OTP cell; reliability is thus secured by preventing non-blown eFuse links from being blown by electromigration (EM). Furthermore, in the program-verify-read mode, the result of comparing the programmed datum with the read-out datum is output on the PFb (pass/fail bar) pin while a sensing-margin test is performed with a variable pull-up load, taking into account the resistance variation of a programmed eFuse. The layout size of the 8-bit eFuse OTP IP in a $0.18{\mu}m$ process is $189.625{\mu}m{\times}138.850{\mu}m$ ($=0.0263mm^2$).
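A toy behavioral model of the IRD lockout idea (purely illustrative; the paper describes a transistor-level circuit, and everything here beyond the RD/DOUT signal names is an assumption):

```python
# Hypothetical behavioral model: once the power-up read has completed,
# DOUT stays latched and later glitches on RD do not re-trigger read mode.
class IrdLatch:
    def __init__(self):
        self.read_done = False   # set after the power-up read completes
        self.dout = None

    def power_up_read(self, efuse_datum: int) -> None:
        self.dout = efuse_datum  # sense the eFuse cell once
        self.read_done = True    # lock out further read-mode entries

    def on_rd_edge(self) -> int:
        # A glitch on RD after power-up is ignored: the latched DOUT
        # is returned instead of re-entering read mode.
        return self.dout

ird = IrdLatch()
ird.power_up_read(1)
assert ird.on_rd_edge() == 1     # glitch on RD leaves DOUT unchanged
```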

Improvement of Residual Delay Compensation Algorithm of KJJVC (한일상관기의 잔차 지연 보정 알고리즘의 개선)

  • Oh, Se-Jin; Yeom, Jae-Hwan; Roh, Duk-Gyoo; Oh, Chung-Sik; Jung, Jin-Seung; Chung, Dong-Kyu; Oyama, Tomoaki; Kawaguchi, Noriyuki; Kobayashi, Hideyuki; Kawakami, Kazuyuki; Ozeki, Kensuke; Onuki, Hirohumi
    • Journal of the Institute of Convergence Signal Processing, v.14 no.2, pp.136-146, 2013
  • In this paper, an improved residual delay compensation algorithm is proposed for the FX-type KJJVC. In the initial design of the KJJVC algorithm, integer arithmetic and a cos/sin lookup table for the phase compensation coefficients were introduced to speed up the calculation. Mismatches between the data timing and the residual delay phase, and between bit-jumps and the residual delay phase, were found and fixed. In the final design, an initialization problem in the rotation memory of the residual delay compensation was found when the residual-delay-compensated value was applied to an FFT segment; this problem was also fixed by modifying the FPGA code. With the proposed algorithm, the band shape of the cross power spectrum becomes flat, meaning there is no significant loss over the whole bandwidth. To verify the effectiveness of the proposed algorithm, we conducted correlation experiments on real observation data using the simulator and KJJVC. We confirmed that the designed residual delay compensation algorithm works correctly in KJJVC, and that the signal-to-noise ratio increases by about 8%.
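A minimal numerical sketch of residual delay compensation in an FX correlator (illustrative only; the sample rate and delay are assumed values, and the actual KJJVC uses fixed-point arithmetic and a cos/sin table in an FPGA):

```python
# Hypothetical sketch: rotate each FFT segment's spectrum by the phase
# slope that a residual delay tau introduces across frequency.
import numpy as np

fs = 32e6                      # sample rate [Hz], assumed
nfft = 1024
tau = 2.5e-9                   # residual delay [s], assumed

x = np.random.randn(nfft)      # one FFT segment of station data
X = np.fft.rfft(x)
f = np.fft.rfftfreq(nfft, d=1.0 / fs)

# A pure delay tau multiplies the spectrum by exp(-j*2*pi*f*tau), so the
# compensation applies the conjugate rotation. The rotation must be
# re-initialized consistently for every FFT segment -- the bug fixed in
# the paper involved exactly this per-segment initialization.
X_comp = X * np.exp(2j * np.pi * f * tau)
```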

A Study on the Verification of the Profile of Seo's Elderly Stress Scale (SESS) (노인 스트레스 측정 도구(SESS)의 신뢰도 및 타당도 검증 연구)

  • 서현미; 유수정; 하양숙
    • Journal of Korean Academy of Nursing, v.31 no.1, pp.94-106, 2001
  • The purpose of this study was to verify Seo's Elderly Stress Scale (SESS), developed in 1996. Through the modified tool, it is possible to examine the stress of Korean elders and to contribute to their welfare. The subjects were 350 elders over 65 years old living in Seoul, Kwang-Ju, Yang-Ju Gun in Kyung-ki Do, Ui-Jong Bu, and Young-Am Kun in Jeun-Ra Nam Do; the data of 331 elders (94%) were analyzed. Data were collected between January and March 1996 and analyzed using SPSS Win 8.0. The results are as follows: 1. Items with low correlation with the total were removed, leaving 37 of the original 64 items. These 37 items were: death in the family and/or of close friends, family members' behavior not meeting expectations, marriage of a daughter, marriage of a son, friction with a daughter-in-law, arguments among children, children refusing to live with the parent, children leaving home, self injury or accident, infrequent visits from children and grandchildren, providing post-partum care for a daughter or daughter-in-law, decreased decision making and authority at home, the Lunar New Year and the harvest festival, house sitting, working in the house, performing a sacrificial rite, a missed birthday, not living with the eldest son, decreased eyesight, decreased strength, decreased memory, sleep pattern changes, thoughts about death, loneliness, decreased hearing, change of dental condition, change in diet or eating style, difficulty in self-care, moving because of disease or aging, argument with a friend or neighbour, travel, dealing with inheritance procedures, loss of money or property, not enough pocket money, hearing about elderly neglect on television or radio, hoping to go home, and being ignored by others. 2. Overlapping items were discussed with colleagues and modified: 'marriage of a daughter' and 'marriage of a son' were merged into 'marriage of children'; 'self injury or accident' and 'family accident' were merged into 'self or family accident'. 3. Factor analysis was done to establish validity, and three factors were obtained. The first factor, the familial relation area, included 17 items; the second factor, the physical area, included 9 items; the third factor, the psycho-socio-economic area, included 9 items. Cronbach's coefficient alpha for the 35 items was .923. 4. Pearson's correlation between the SESS and the SOS (Symptoms of Stress) was .704, confirming construct validity. Based on these results, the following are suggested: 1. The modified SESS needs to be re-verified with elders. 2. Korean elders' health promotion can be supported by developing stress interventions based on stress accurately measured with the SESS.
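For reference, Cronbach's alpha, the internal-consistency statistic reported above (.923 over 35 items), can be computed from an item-score matrix as follows (a generic sketch, not the authors' SPSS procedure; the score matrix here is simulated):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(331, 35))        # 331 elders x 35 items, simulated
print(cronbach_alpha(scores))
```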


Design of Hardwired Variable Length Decoder for H.264/AVC (하드웨어 구조의 H.264/AVC 가변길이 복호기 설계)

  • Yu, Yong-Hoon; Lee, Chan-Ho
    • Journal of the Institute of Electronics Engineers of Korea SD, v.45 no.11, pp.71-76, 2008
  • H.264 (or MPEG-4 AVC Part 10) is a high-performance video coding standard and is widely used. The variable length codes (VLC) of the H.264 standard compress data using the statistical distribution of values. A decoder parses the compressed bit stream and looks decoded values up in tables, and this decoding process is not easy to implement in hardware. We propose an architecture of a variable length decoder (VLD) for the H.264 baseline profile (BP) Level 4. The CAVLD (CAVLC decoder) decodes syntax elements using a combination of arithmetic units and lookup tables for an optimized hardware architecture. A barrel shifter and a first-1 detector parse the NAL bit stream and are shared by the Exp-Golomb decoder and the CAVLD. A FIFO memory between the CAVLD and the reorder unit, and a buffer at the output of the reorder unit, eliminate the bottleneck in the data stream. The proposed VLD is designed in Verilog-HDL and implemented on an FPGA. Synthesis using a $0.18{\mu}m$ standard CMOS technology shows a gate count of 22,604, and the decoder can process HD ($1920{\times}1080$) video at 120 MHz.
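As context, the Exp-Golomb codes mentioned above have a simple parse rule: count leading zeros, skip the terminating 1, then read that many suffix bits. A minimal software sketch of the ue(v) decode (not the paper's hardware design; in the hardware, the leading-zero count corresponds to the first-1 detector and the suffix extraction to the barrel shifter):

```python
# Decode one unsigned Exp-Golomb codeword (ue(v) in H.264):
# N leading zero bits, a 1, then N suffix bits; value = 2^N - 1 + suffix.
def decode_ue(bits: str, pos: int = 0) -> tuple[int, int]:
    zeros = 0
    while bits[pos + zeros] == "0":   # leading-zero count (first-1 detection)
        zeros += 1
    start = pos + zeros + 1           # skip the terminating '1'
    suffix = int(bits[start:start + zeros] or "0", 2)
    value = (1 << zeros) - 1 + suffix
    return value, start + zeros       # decoded value, next bit position

assert decode_ue("1") == (0, 1)       # '1'     -> 0
assert decode_ue("010") == (1, 3)     # '010'   -> 1
assert decode_ue("00111") == (6, 5)   # '00111' -> 6
```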

Timely Sensor Fault Detection Scheme based on Deep Learning (딥 러닝 기반 실시간 센서 고장 검출 기법)

  • Yang, Jae-Wan; Lee, Young-Doo; Koo, In-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.20 no.1, pp.163-169, 2020
  • Recently, research on automation and unmanned operation of machines in the industrial field has been conducted with the advent of AI, big data, and the IoT, the core technologies of the Fourth Industrial Revolution. The machines in these automated processes are controlled, and the processes managed, based on the data collected from the sensors attached to them. Conventionally, sensor abnormalities are checked and managed periodically. However, due to various environmental factors and situations in the industrial field, inspections may be missed or failures may go undetected, so damage due to sensor failure is not prevented. In addition, even when a failure occurs it may not be detected immediately, which worsens the process loss. Therefore, to prevent damage caused by sudden sensor failure, it is necessary to identify sensor failure in an embedded system in real time, diagnose the failure, and determine its type for a quick response. In this paper, a deep neural network-based fault diagnosis system is designed and implemented on a Raspberry Pi to classify typical sensor fault types: erratic, hard-over, spike, and stuck faults. To diagnose sensor failure, the network is constructed using the inverted residual block structure proposed by Google for MobileNetV2. The proposed scheme reduces memory usage and improves on the performance of a conventional CNN in classifying sensor faults.
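To make the four fault types concrete, a small simulation sketch (the signal and the fault parameters are illustrative assumptions, not values from the paper):

```python
# Inject the four classic sensor fault types into a clean signal.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 1000)
clean = np.sin(t)                                  # healthy sensor reading

erratic = clean + rng.normal(0, 0.5, t.size)       # erratic: excessive noise variance
hard_over = np.full_like(clean, 5.0)               # hard-over: output pinned at a rail
spike = clean.copy()
spike[::100] += 3.0                                # spike: intermittent large outliers
stuck = np.full_like(clean, clean[200])            # stuck: output frozen at one value
```

Windows of such signals, labeled by fault type, are the kind of input a classifier like the one described above would be trained on.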

Effects of Self-Identification with Threatened In-Group and System Justification on Within-Domain Consumption

  • CHOI, Nak-Hwan
    • The Journal of Industrial Distribution & Business, v.11 no.8, pp.39-49, 2020
  • Purpose: This study explored the role of system justification in the effects of consumers' self-identification with a threatened social in-group on within-domain versus across-domain consumption. It examined whether both self-definition and self-investment positively affect in-group system justification, and whether system justification, in turn, positively affects consumption. Research design, data and methodology: Self-identification was approached in terms of self-definition and self-investment when the in-group was threatened by members of an out-group. The empirical study used a single-factor within-subject design based on consumers' feeling of threat when the in-group was criticized by others. The in-group threat was accessed from memory: the undergraduate students participating in the study were asked to recall events in their past lives in which an in-group important to them was perceived to be threatened. Questionnaire data collected from the students were used to test the research hypotheses with a structural equation model in Amos 21.0. Results: First, self-definition positively affected within-domain versus across-domain consumption, but did not affect in-group system justification. Second, self-investment positively affected in-group system justification. Third, system justification had positive effects on within-domain versus across-domain consumption. This article therefore contributes to the theory of compensatory consumption by showing that system justification can positively mediate the effect of consumers' self-investment in their in-group on within-domain versus across-domain consumption when the in-group is threatened. Conclusions: The results offer managerial implications for brand and product marketers. When consumers' important in-group is threatened by others, the issue for marketers is how to vitalize consumers' self-definition with, and self-investment in, the threatened in-group. By evoking in-group-based self-investment when the in-group is threatened, marketers should raise the level of system justification and encourage consumers to perceive their products or brands as falling within the self domain.
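The mediation pattern in the results (self-investment → system justification → within-domain consumption) can also be checked outside Amos; a regression-based sketch with simulated data (variable names, effect sizes, and the simple a×b product approach are illustrative, not the authors' procedure):

```python
# Simple regression-based mediation check:
# self_investment -> system_justification -> within_domain_consumption.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
invest = rng.normal(size=n)                        # self-investment (simulated)
justify = 0.5 * invest + rng.normal(size=n)        # a-path
consume = 0.4 * justify + rng.normal(size=n)       # b-path

a = sm.OLS(justify, sm.add_constant(invest)).fit()
b = sm.OLS(consume, sm.add_constant(np.column_stack([invest, justify]))).fit()
indirect = a.params[1] * b.params[2]               # a*b indirect effect
print(f"indirect effect estimate: {indirect:.3f}")
```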

Application of KOMPSAT-5 SAR Interferometry by using SNAP Software (SNAP 소프트웨어를 이용한 KOMPSAT-5 SAR 간섭기법 구현)

  • Lee, Hoonyol
    • Korean Journal of Remote Sensing, v.33 no.6_3, pp.1215-1221, 2017
  • The Sentinel Application Platform (SNAP) is open source software developed by the European Space Agency; it consists of several toolboxes that process data from the Sentinel satellite series, including SAR (Synthetic Aperture Radar) and optical satellites. Among them, S1TBX (Sentinel-1 ToolBoX) is mainly used to process Sentinel-1A/B SAR images and interferometric techniques. It provides flowchart-style processing through the Graph Builder and convenient functions including automatic DEM (Digital Elevation Model) downloading and image mosaicking. Therefore, if computer memory is sufficient, InSAR (Interferometric SAR) and DInSAR (Differential InSAR) processing run smoothly, and the software has recently seen wide use worldwide thanks to rapid upgrades. S1TBX also includes general SAR processing functions, and since version 5 the capability to process KOMPSAT-5 has been added. This paper shows an example of interferometric processing of KOMPSAT-5 SAR images using the S1TBX of SNAP. At the Tavan Tolgoi open mine in Mongolia, the difference between a DEM obtained from KOMPSAT-5 in 2015 and the SRTM 1-arcsecond DEM obtained in 2000 was analyzed: over 15 years, excavation reached a maximum depth of 130 meters and the accumulated ore rose to a height of over 70 meters. Tidal and topographic InSAR signals were observed in the glacier area near the Jangbogo Antarctic Research Station, but SNAP could not process them due to orbit error and DEM error. In addition, several DInSAR images were made over the Iraqi desert, but many lines caused by systematic errors were found in the coherence images, and stacking for StaMPS application was not possible due to orbit error or a program bug. It is expected that SNAP will resolve these problems, owing to its surging user base and the software's very fast upgrade cycle.
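The Tavan Tolgoi analysis reduces, conceptually, to subtracting two co-registered elevation rasters; a sketch with rasterio (file names are placeholders, and real data would first need reprojection and resampling to a common grid):

```python
# Elevation change = later DEM minus earlier DEM, on a common grid.
import numpy as np
import rasterio

with rasterio.open("kompsat5_insar_dem_2015.tif") as src:   # placeholder path
    dem_2015 = src.read(1).astype(float)
with rasterio.open("srtm_1sec_2000.tif") as src:            # placeholder path
    dem_2000 = src.read(1).astype(float)

change = dem_2015 - dem_2000            # negative: excavation; positive: accumulation
print("max excavation depth [m]:", -np.nanmin(change))
print("max accumulation height [m]:", np.nanmax(change))
```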

A Study on a Coping Method of the Family Caregivers of Demented Patients (치매노인 가족부양자의 대처방법에 관한 연구)

  • You, Kwang-Soo
    • Research in Community and Public Health Nursing, v.13 no.4, pp.648-667, 2002
  • This was a descriptive study designed to identify the level of coping and its influencing factors among family caregivers of demented patients, in order to help relieve the caregivers' stress. The data were collected from September 10 to October 10, 2001. Subjects were recruited from four clinics, chosen from the 15 clinics located in Chunbuk-Do because of their cooperation with the study; the four were similar in size, the characteristics of the local community, and the population and registration status of demented patients. The instruments used were as follows: 1. Problematic behaviors of demented patients were measured by the Memory and Behavior Problem Checklist (Zarit, 1980) and the Linguistic Communication Symptoms Questionnaire (Bayles and Tomoeda, 1991). 2. The ability to carry out daily activities was measured using the Barthel Index (1965) and the Katz Index (1963), which are well-known ADL assessment methods. 3. Burden was measured using the Cost of Care Index of Kosberg and Cairl (1986). 4. Coping strategy was measured using Bell's 18 methods (1977). The data were analyzed using SPSS/PC. The results were as follows: 1. The total stress score was 2.90 out of a maximum of 5; the highest score, 3.09, was on the dimension of restriction of individual and social activities, and the lowest, 2.58, on the dimension of mental and physical health. 2. The total coping score was 2.65 out of a maximum of 5; the highest score, 4.01, was on the dimension of thinking that includes the ideation that things are better than the worst possible case, and the lowest, 1.45, on the dimension of self-image as a scapegoat. 3. There were significant differences in coping method by age (F=2.752, p=0.04), caregiver (F=4.33, p=0.003), care-giving period (F=2.68, p=0.049), and dementia stage (F=2.87, p=0.034). 4. There was a significant negative correlation ($r$=-0.301, p=0.000) between problematic behaviors of demented patients and the coping method of their family caregivers; the strongest coefficient ($r$=-0.339, p=0.000) was between the patients' aggressive behaviors and the caregivers' coping method. 5. There was a low negative correlation ($r$=-0.201, p=0.019) between the patients' ADL and the caregivers' coping method. 6. There was a significant negative correlation ($r$=-0.213, p=0.005) between stress and the coping method of the family caregivers; the strongest was with financial burden ($r$=-.327, p=.000). There were no significant correlations between the coping method and unpleasant aspects of the demented patients or willingness toward the demented patients.
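The group comparisons above are one-way ANOVAs and the associations are Pearson correlations; with raw data they could be reproduced along these lines (a generic sketch with simulated scores, not the study's SPSS/PC output):

```python
# One-way ANOVA across groups, plus a Pearson correlation,
# mirroring the kinds of tests reported above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
coping_by_stage = [rng.normal(2.6 + 0.1 * g, 0.5, 30) for g in range(4)]  # 4 dementia stages
f_stat, p_val = stats.f_oneway(*coping_by_stage)
print(f"F={f_stat:.2f}, p={p_val:.3f}")

stress = rng.normal(2.9, 0.4, 120)
coping = 3.5 - 0.3 * stress + rng.normal(0, 0.4, 120)
r, p = stats.pearsonr(stress, coping)              # expect a negative r
print(f"r={r:.3f}, p={p:.3f}")
```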
