• Title/Summary/Keyword: Near-Memory Processing

Search results: 38

Design of After-processing Encrypted Record System for Copy Protection of Digital Video Optical Discs (디지털 비디오 광 디스크의 복제방지를 위한 후처리 암호화 기록 장치의 설계)

  • Kim, Hyeong-Woo;Joo, Jae-Hoon;Kim, Jin-Ae;Choi, Jung-Kyeng
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.6 / pp.1435-1440 / 2010
  • This paper presents an encrypted secret-code recording system that can insert a unique manufacturer ID code after the disc manufacturing process is complete. First, we detect the memory block synchronizing signal (SYNC) using an FPGA; we then design a recording pattern that writes a multi-pulse. Finally, a method is proposed for recording arbitrary data at an arbitrary location in the data area of an optical disc using the FPGA. The newly proposed method, in which user data is recorded in protected data areas of digital video optical discs, can be very useful for effective software copy protection and is applicable to encrypted recording on high-density DVDs in the near future.
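The core step described above is aligning a write operation to a SYNC pattern detected in the recorded bitstream. As a rough illustration only (not the authors' FPGA design), the sketch below scans a bitstream for a hypothetical sync word; the pattern value and the stream are made-up placeholders.

```python
# Minimal sketch: locate occurrences of a (hypothetical) SYNC bit pattern in a
# bitstream so that a write could be aligned to it. Illustrative only; this is
# not the authors' FPGA logic or a real DVD sync code.

def find_sync_positions(bits: str, sync_word: str) -> list[int]:
    """Return the start indices where sync_word occurs in the bitstream."""
    positions = []
    start = 0
    while True:
        idx = bits.find(sync_word, start)
        if idx == -1:
            break
        positions.append(idx)
        start = idx + 1
    return positions

if __name__ == "__main__":
    SYNC = "0001001001"  # placeholder pattern, not an actual optical-disc sync word
    stream = "1100" + SYNC + "10101" + SYNC + "0"
    print(find_sync_positions(stream, SYNC))  # -> [4, 19]
```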

Implementation of Data processing of the High Availability for Software Architecture of the Cloud Computing (클라우드 서비스를 위한 고가용성 대용량 데이터 처리 아키텍쳐)

  • Lee, Byoung-Yup;Park, Junho;Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.13 no.2 / pp.32-43 / 2013
  • These days, more and more IT research institutions foresee cloud services as the predominant IT service of the near future, and several leading IT vendors already provide actual cloud services. Regardless of the physical location of the service and the system environment, a cloud service can provide users with storage, data, and software. On the other hand, cloud services face challenges as well. Although cloud computing has an edge in that IT resources can be utilized freely regardless of hardware constraints, availability remains a problem to be solved. Hence, this paper addresses the prerequisites of cloud computing for distributed file systems: the open-source Hadoop distributed file system, in-memory database technology, and high-availability database systems. The authors also flesh out a high-availability, large-scale distributed data-management architecture from the cloud-service perspective, using the distributed file systems currently deployed in the cloud-computing market.
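The availability concern raised in this abstract is commonly addressed in HDFS-style systems by replicating each block across several nodes. As a back-of-the-envelope illustration (my own sketch, not taken from the paper), the snippet below estimates the probability that at least one replica of a block survives when each node fails independently.

```python
# Rough sketch: probability that a block stays available when it is replicated
# on r independent nodes, each failing with probability p. Illustrative only;
# real HDFS availability also depends on rack placement, re-replication, etc.

def block_availability(node_failure_prob: float, replicas: int) -> float:
    """A block is lost only if every node holding a replica fails."""
    return 1.0 - node_failure_prob ** replicas

if __name__ == "__main__":
    p = 0.01  # assumed per-node failure probability during some time window
    for r in (1, 2, 3):
        print(f"replication factor {r}: availability = {block_availability(p, r):.6f}")
```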

Thermal properties and mechanical properties of dielectric materials for thermal imprint lithography

  • Kwak, Jeon-Bok;Cho, Jae-Choon;Ra, Seung-Hyun
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2006.06a / pp.242-242 / 2006
  • Increasingly complex tasks are performed by computers and cellular phones, requiring ever more memory capacity as well as faster processing speeds. This leads to a constant need to develop more highly integrated circuit systems, and consequently circuit patterning has been studied extensively. In particular, PCBs, including module/package substrates such as FCB (Flip Chip Board), have been developed toward low profile, low power, and multi-functionality owing to the demands for miniaturization, higher functional density of the boards, and higher performance of electronic devices. Imprint lithography has received significant attention as an alternative to photolithography for such devices. The imprint technique is one of the most promising candidates, especially because its expected resolution limits are far beyond the requirements of the PCB industry for the near future. To apply imprint lithography to FCB, it is very important to control the thermal and mechanical properties of the dielectric materials. These properties depend strongly on the epoxy resin, curing agent, accelerator, filler, and curing degree (%) of the dielectric materials. In this work, epoxy composites filled with silica fillers and cured with various accelerators to various curing degrees (%) were prepared. The thermal and mechanical properties were characterized by thermomechanical analysis (TMA), thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), a rheometer, and a universal test machine (UTM).

Real-time 2-D Separable Median Filter (실시간 2차원 Separable 메디안 필터)

  • Jae Gil Jeong
    • Journal of the Korea Computer Industry Society / v.3 no.3 / pp.321-330 / 2002
  • A 2-D median filter has many applications in various image and video signal processing areas. The rapid development of VLSI technology makes it possible to implement a real-time or near real-time 2-D median filter at reasonable cost. For an efficient VLSI implementation, the algorithm should have characteristics such as small memory requirements, regular computations, and local data transfers. This paper presents an architecture for a real-time two-dimensional separable median filter with characteristics appropriate for VLSI implementation. For an efficient two-dimensional median filter, a separable two-dimensional median filtering structure and a bit-sliced pipelined median searching algorithm are used. A behavioral simulator was implemented in C and used to analyze the presented architecture.
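The separable structure mentioned above replaces a full 2-D window median with a row-wise 1-D median followed by a column-wise 1-D median, which only approximates the true 2-D median but needs less memory and uses regular, local computation. A minimal software sketch of that idea (not the paper's bit-sliced hardware architecture) follows.

```python
# Sketch of a separable 2-D median filter: a 1-D median along rows, then along
# columns. This approximates (does not equal) a true 2-D window median, which is
# exactly why it is attractive for low-memory, regular hardware implementations.
import numpy as np

def median_1d(signal: np.ndarray, k: int = 3) -> np.ndarray:
    """1-D median filter with edge padding and window size k."""
    pad = k // 2
    padded = np.pad(signal, pad, mode="edge")
    return np.array([np.median(padded[i:i + k]) for i in range(len(signal))])

def separable_median(image: np.ndarray, k: int = 3) -> np.ndarray:
    rows = np.apply_along_axis(median_1d, 1, image, k)   # horizontal pass
    return np.apply_along_axis(median_1d, 0, rows, k)    # vertical pass

if __name__ == "__main__":
    img = np.array([[10, 10, 10, 10],
                    [10, 99, 10, 10],   # single impulse-noise pixel
                    [10, 10, 10, 10],
                    [10, 10, 10, 10]], dtype=float)
    print(separable_median(img, k=3))   # the impulse is removed
```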

Digital Processing and Acoustic Backscattering Characteristics on the Seafloor Image by Side Scan Sonar (Side Scan Sonar 탐사자료의 영상처리와 해저면 Backscattering 음향특성)

  • 김성렬;유홍룡
    • 한국해양학회지 (Journal of the Korean Society of Oceanography) / v.22 no.3 / pp.143-152 / 1987
  • The digital data were obtained using a Kennedy 9000 magnetic tape deck connected to an SMS960 side scan sonar during the field operations. Data from three consecutive survey tracks near Seongsan-po, Cheju were used for this study. The software was written mainly in Fortran-77 on a VAX 11/780 minicomputer (4 MB of main memory). The established mapping system consists of pretreatment and digital processing of the seafloor image data. Pretreatment was necessary because the raw digital data format on the field magnetic tapes was not compatible with the VAX system; therefore the raw data were read by a personal computer using assembly language, converted to an IBM-compatible format, and then transferred to the VAX system. The digital processing includes geometrical correction for slant range, statistical analysis, and cartography of the seafloor image. The sound speed in the water column was assumed to be 1,500 m/sec for the slant-range correction, and a moving-average method was used for signal trace smoothing. Histograms and cumulative curves were produced for the statistical analysis, whose purpose was to classify the backscattering strength of the sea bottom. The seafloor image was displayed on the color screen of a TEKTRONIX 4113B terminal. According to a brief interpretation of the resulting image map, rocky and sedimentary bottoms were discriminated very well. It was also shown that the backscattered acoustic pressure correlates with the grain size and sorting of the surface sediments.
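The geometrical correction for slant range mentioned in the abstract converts the two-way travel time of each sonar return into a horizontal distance, given the assumed sound speed (1,500 m/s) and the towfish altitude. A hedged sketch of that standard correction follows; the altitude and travel-time values are hypothetical.

```python
# Sketch of the standard slant-range correction used in side scan sonar
# processing: horizontal range = sqrt(slant_range^2 - altitude^2), where the
# slant range comes from the two-way travel time and an assumed sound speed.
import math

SOUND_SPEED = 1500.0  # m/s, as assumed in the paper

def horizontal_range(two_way_time_s: float, altitude_m: float) -> float:
    slant = SOUND_SPEED * two_way_time_s / 2.0     # one-way slant distance
    if slant <= altitude_m:
        return 0.0                                 # return is at or under the nadir
    return math.sqrt(slant ** 2 - altitude_m ** 2)

if __name__ == "__main__":
    # hypothetical example: 0.08 s two-way time, towfish 20 m above the seafloor
    print(f"{horizontal_range(0.08, 20.0):.1f} m")  # slant 60 m -> ~56.6 m horizontal
```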

Data Processing Architecture for Cloud and Big Data Services in Terms of Cost Saving (비용절감 측면에서 클라우드, 빅데이터 서비스를 위한 대용량 데이터 처리 아키텍쳐)

  • Lee, Byoung-Yup;Park, Jae-Yeol;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.15 no.5 / pp.570-581 / 2015
  • In recent years, many institutions have predicted that cloud services and big data will be the popular IT trends of the near future, and a number of leading IT vendors are focusing on practical solutions and services for both. Cloud computing has the advantage of unrestricted resource selection for a business model built on a variety of Internet-based technologies, which is why provisioning and virtualization technologies for dynamic resource expansion have attracted attention above all others. Big data has taken data prediction models to another level by providing a basis for analyzing unstructured data that could not be analyzed in the past. Since cloud services and big data both rest on services and analysis over massive amounts of data, efficient operation and design for mass data has become a critical issue from the early stages of development. Thus, in this paper we establish a data-processing architecture based on the technological requirements of mass data for cloud and big data services. In particular, we introduce the requirements a distributed file system must meet for cloud computing, the requirements for efficient compression of mass data for big data and cloud computing in terms of cost saving, and the technological requirements of open-source systems such as the Hadoop-ecosystem distributed file system and memory databases available in cloud computing.
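Since the abstract emphasizes compression of mass data as a cost-saving requirement, the sketch below illustrates the kind of back-of-the-envelope comparison involved: measuring the compression ratio of a sample payload and translating it into an estimated storage-cost saving. The unit price and data volume are made-up placeholders, not figures from the paper.

```python
# Sketch: estimate storage-cost savings from compressing a data sample.
# Illustrative only; the unit price, volume, and sample data are placeholders.
import zlib

def compression_ratio(data: bytes, level: int = 6) -> float:
    """Ratio of compressed size to original size (smaller is better)."""
    return len(zlib.compress(data, level)) / len(data)

if __name__ == "__main__":
    sample = b"timestamp,sensor_id,value\n" + b"2015-01-01T00:00:00,42,3.14\n" * 10_000
    ratio = compression_ratio(sample)
    price_per_gb = 0.03   # hypothetical $/GB-month
    raw_gb = 10_000.0     # hypothetical uncompressed data volume in GB
    print(f"compression ratio: {ratio:.3f}")
    print(f"estimated monthly cost: ${raw_gb * ratio * price_per_gb:,.2f} "
          f"vs ${raw_gb * price_per_gb:,.2f} uncompressed")
```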

Effect of Visual Perception by Vision Therapy for Improvement of Visual Function (시각기능 개선을 위한 시기능훈련이 시지각에 미치는 영향)

  • Lee, Seung Wook;Lee, Hyun Mee
    • Journal of Korean Ophthalmic Optics Society / v.20 no.4 / pp.491-499 / 2015
  • Purpose: This study examined how a decline of visual function affects visual perception, by assessing visual perception after improving visual function through vision training and observing the change in visual-perceptual cognitive ability. Methods: The study analyzed the visual perceptual evaluation (TVPS_R) of 23 children under 13 years of age (mean 8.75±1.66 years) who had visual abnormalities, and improved their visual function through vision training (vision therapy). Results: Convergence increased from an average of 3.39±2.52Δ (prism diopters) to 13.87±6.04Δ measured at distance, and from 5.48±3.42Δ to 18.43±7.58Δ measured at near. The near diplopia point moved in from 25.87±7.33 cm to 7.48±2.87 cm, and for accommodative insufficiency the near blur point moved in from 19.57±7.16 cm to 7.09±1.88 cm. In the visual perceptual evaluation performed before and after improving visual function, six items (all except visual memory) showed statistically significant improvement. In order of improvement, the pre/post score difference was largest for visual closure at 17.74±16.94 (p=0.000), followed by 15.65±17.11 (p=0.000) for visual sequential memory, 13.65±16.63 (p=0.001) for visual figure-ground, 12.74±18.41 (p=0.003) for visual form constancy, 6.48±10.07 (p=0.005) for visual discrimination, and 4.17±9.33 (p=0.043) for visual spatial relationship. For the visual perception quotient that sums these scores, the difference was 15.22±8.66 (p=0.000), an even more significant result. Conclusions: Vision training enables efficient visual processing and improves visual-perceptual ability. It was confirmed that improving visual function through vision training not only corrects abnormal visual function but also affects children's visual perception in learning, perception, and recognition.
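The before/after comparisons in this abstract (means ± SD with p-values) correspond to paired significance tests. As a minimal, hypothetical sketch of that kind of analysis, not the study's actual data or code, a paired t-test can be run as follows.

```python
# Minimal sketch of the paired pre/post comparison implied by the reported
# p-values. The scores below are invented for illustration only.
from scipy import stats

pre  = [45, 50, 38, 41, 47, 52, 44, 39]   # hypothetical pre-training scores
post = [58, 61, 49, 55, 60, 63, 57, 50]   # hypothetical post-training scores

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```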

Identification of Japanese Black Cattle by the Faces for Precision Livestock Farming (흑소의 얼굴을 이용한 개체인식)

  • 김현태;지전선랑;서률귀구;이인복
    • Journal of Biosystems Engineering / v.29 no.4 / pp.341-346 / 2004
  • Livestock producers today are concerned not only with increasing production but also with a high-quality animal breeding environment. So far, optimization of the breeding and air environment has focused on increasing production; in the very near future, the emphasis will shift toward an environment for animal welfare and health. Cattle farming in particular demands precision livestock farming, with special attention given to the management of feeding, animal health, and fertility. Managing individual animals is the first step toward precision livestock farming and animal welfare, and recognizing each individual is important for that. Although electronic identification of cattle, such as RFID (Radio Frequency Identification), has many advantages, RFID implementations involve several practical problems such as reading speed and distance. In that sense, computer vision may be more effective than RFID for identifying individual animals. Previous research on cattle identification via image processing was mostly performed with Holstein cows, which have black-and-white coat patterns, but native Korean and Japanese cattle do not have any definite body pattern. The purpose of this research is to identify Japanese Black cattle, which have no body pattern, using computer vision technology and a neural network algorithm. Twelve head of Japanese Black cattle were tested to verify the proposed scheme. The values of the input parameters were specified and computed from face images of the cattle. The face images were trained using an associative neural network algorithm, and the algorithm was verified with face images transformed by brightness, distortion, and noise factors. The results differed with the degree of brightness, distortion, and noise transformation: the proposed algorithm identified 100% of the animals for brightness changes from -3 to +3 degrees, distortion from -2 to +4 degrees, and noise from 0% to 60%. It is concluded that the system cannot be applied to real-time recognition of moving cattle, but can be used when the animals are standing still.
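The robustness test described above, in which the trained network is re-checked on brightness-shifted and noise-corrupted face images, can be illustrated independently of the authors' associative neural network. The sketch below generates only the perturbed test images; the classifier itself, the brightness step size, and the noise model are assumptions of this illustration, not the paper's exact definitions.

```python
# Sketch of the image perturbations described in the abstract: brightness shifts
# and additive noise applied to a face image before re-testing a classifier.
# The perturbation levels and step sizes are illustrative placeholders.
import numpy as np

def shift_brightness(img: np.ndarray, level: int) -> np.ndarray:
    """Shift pixel intensities by a fixed step per level (placeholder: 10 per level)."""
    return np.clip(img.astype(int) + 10 * level, 0, 255).astype(np.uint8)

def add_noise(img: np.ndarray, fraction: float, rng=np.random.default_rng(0)) -> np.ndarray:
    """Replace a given fraction of pixels with random values (salt-and-pepper style)."""
    noisy = img.copy()
    mask = rng.random(img.shape) < fraction
    noisy[mask] = rng.integers(0, 256, size=int(mask.sum()), dtype=np.uint8)
    return noisy

if __name__ == "__main__":
    face = np.full((64, 64), 128, dtype=np.uint8)   # stand-in for a cattle face image
    test_set = [shift_brightness(face, lvl) for lvl in range(-3, 4)]
    test_set += [add_noise(face, f) for f in (0.0, 0.2, 0.4, 0.6)]
    print(f"{len(test_set)} perturbed test images generated")
```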