• Title/Summary/Keyword: Computer Image Analysis

Search Results: 1,424

Malware Classification using Dynamic Analysis with Deep Learning

  • Asad Amin;Muhammad Nauman Durrani;Nadeem Kafi;Fahad Samad;Abdul Aziz
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.8
    • /
    • pp.49-62
    • /
    • 2023
  • The rapid increase in the creation and alteration of new malware samples poses a huge financial risk for many organizations. There is strong demand for improving the classification and detection mechanisms available today: older strategies, such as classification using machine learning algorithms, have proved useful but do not perform well when scalable, automatic feature extraction is required. To overcome this, a mechanism is needed that analyzes malware automatically on the basis of an automatic feature extraction process. For this purpose, dynamic analysis of real malware executable files was performed to extract useful features such as the API call sequence and the opcode sequence. Different hashing techniques were analyzed to generate images from these features, converting them into an image representation that allows huge numbers of samples to be classified with more advanced deep learning approaches. Deep learning algorithms such as convolutional neural networks (CNNs) enable malware classification by converting samples into images: when these are converted to grayscale images and fed into the CNN, classification remains comparatively robust to dynamic changes in the malware code, since such changes alter only a few pixels of the grayscale image. In this work, we used the VGG-16 CNN architecture for experimentation.
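
As a rough illustration of the byte-to-grayscale-image step this abstract describes, the sketch below maps a raw byte sequence (e.g. a hashed API-call or opcode trace) onto a 2-D pixel grid; the function name, zero-padding scheme and fixed row width are illustrative choices, not taken from the paper:

```python
import numpy as np

def bytes_to_grayscale(data: bytes, width: int = 64) -> np.ndarray:
    """Map a raw byte sequence (e.g. a hashed API-call or opcode trace)
    to a 2-D grayscale image: each byte becomes one pixel in [0, 255]."""
    arr = np.frombuffer(data, dtype=np.uint8)
    height = -(-len(arr) // width)           # ceiling division
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[:len(arr)] = arr                  # zero-pad the last row
    return padded.reshape(height, width)

img = bytes_to_grayscale(b"\x90\x90\xeb\xfe" * 64, width=16)
print(img.shape)  # (16, 16)
```

An array produced this way would still need resizing to the fixed input resolution that a VGG-16 classifier expects.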

High Efficient Entropy Coding For Edge Image Compression

  • Han, Jong-Woo;Kim, Do-Hyun;Kim, Yoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.5
    • /
    • pp.31-40
    • /
    • 2016
  • In this paper, we analyse the characteristics of the edge image and propose a new entropy coding optimized for edge image compression. The pixel values of an edge image follow a Gaussian distribution around '0', and most pixel values are '0'. Based on this analysis, a Zero Block technique is utilized in the spatial domain. Moreover, the Intra Prediction Mode of an edge image tends to match the mode of the surrounding blocks or to be the Planar Mode or the Horizontal Mode, so we make use of the MPM technique, which codes the Intra Prediction Mode using its high-probability modes. Using these properties, we design a new entropy coding method suited to edge images and use it to perform compression. When existing compression techniques are applied to edge images, the compression ratio is low, the algorithms are more complicated than necessary, and the running time is very long, because those techniques are designed for natural images. In contrast, the proposed technique achieves a high compression ratio and a very short running time, because the algorithm is optimized for edge image compression. Experimental results indicate that the proposed algorithm provides better visual and PSNR performance, up to 11 times that of JPEG.
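
The Zero Block idea the abstract relies on can be sketched in isolation: because most edge-image pixels are '0', whole all-zero blocks can be signalled with a single flag. This is only a minimal illustration of that one idea; the paper's actual coder operates together with intra prediction and MPM signalling, which are omitted here:

```python
import numpy as np

def zero_block_encode(img, bs=4):
    """Split the image into bs x bs blocks; an all-zero block is sent
    as a single flag (None), a non-zero block carries its pixels."""
    return [None if not img[y:y + bs, x:x + bs].any()
            else img[y:y + bs, x:x + bs].copy()
            for y in range(0, img.shape[0], bs)
            for x in range(0, img.shape[1], bs)]

def zero_block_decode(blocks, shape, bs=4):
    """Rebuild the image, filling flagged blocks with zeros."""
    img = np.zeros(shape, dtype=np.uint8)
    it = iter(blocks)
    for y in range(0, shape[0], bs):
        for x in range(0, shape[1], bs):
            blk = next(it)
            if blk is not None:
                img[y:y + bs, x:x + bs] = blk
    return img
```

On a sparse edge map most blocks collapse to a one-flag cost, which is where the compression gain over natural-image coders comes from.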

Non-square colour image scrambling based on two-dimensional Sine-Logistic and Hénon map

  • Zhou, Siqi;Xu, Feng;Ping, Ping;Xie, Zaipeng;Lyu, Xin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.12
    • /
    • pp.5963-5980
    • /
    • 2017
  • Image scrambling is an important technology in information hiding, where the Arnold transformation is widely used. Several researchers have proposed applying the Hénon map to square image scrambling; although it can be applied directly to non-square images, the improved techniques require scrambling many times to achieve a good effect and still cannot resist a chosen-plaintext attack. This paper presents a non-square image scrambling algorithm that can resist chosen-plaintext attack, based on a chaotic two-dimensional Sine-Logistic modulation map combined with the Hénon map (2D-SLHM). Theoretical analysis and experimental results show that the proposed algorithm has advantages in terms of key space, efficiency, scrambling degree, resistance to attack and robustness to noise interference.
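
To illustrate why a chaotic map handles non-square images naturally, the sketch below scrambles a flattened pixel array with a permutation derived from the plain Hénon map. Note this uses only the classic Hénon map with an argsort-based permutation as a stand-in; the paper's actual 2D-SLHM construction and its chosen-plaintext resistance are not reproduced here:

```python
import numpy as np

def henon_sequence(n, x0=0.1, y0=0.1, a=1.4, b=0.3):
    """Iterate the classic Henon map and collect its x-coordinates."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs[i] = x
    return xs

def scramble(img, key=(0.1, 0.1)):
    """Permute the pixels of a (possibly non-square) image using the
    chaotic sequence as a sort key; returns the permutation so the
    scrambling can be inverted given the same key."""
    flat = img.reshape(-1, img.shape[-1]) if img.ndim == 3 else img.ravel()
    perm = np.argsort(henon_sequence(len(flat), *key))
    return flat[perm].reshape(img.shape), perm

def unscramble(scrambled, perm):
    """Invert the pixel permutation."""
    flat = (scrambled.reshape(-1, scrambled.shape[-1])
            if scrambled.ndim == 3 else scrambled.ravel())
    out = np.empty_like(flat)
    out[perm] = flat
    return out.reshape(scrambled.shape)
```

Because the permutation is built over the flattened pixel list, the image width and height never need to be equal.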

Target Object Image Extraction from 3D Space using Stereo Cameras

  • Yoo, Chae-Gon;Jung, Chang-Sung;Hwang, Chi-Jung
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1678-1680
    • /
    • 2002
  • The stereo matching technique is used in many practical fields such as satellite image analysis and computer vision. In this paper, we suggest a method to extract a target object image from a complicated background; for example, a human face image can be extracted from a random background. This method can be applied to computer vision tasks such as security systems, dressing simulation using the extracted human face, and 3D modeling. Much research on stereo matching has been performed, and conventional approaches can be categorized into area-based and feature-based methods. In this paper, we start from the area-based method and apply area tracking using a scanning window. Coarse depth information is used for the area merging process together with area searching data. Finally, we produce a target object image.
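
The area-based matching the abstract starts from can be sketched as a minimal sum-of-absolute-differences (SAD) disparity search; the window size, search range and cost function here are generic textbook choices, not the paper's specific scanning-window scheme:

```python
import numpy as np

def sad_disparity(left, right, patch=3, max_disp=8):
    """Area-based stereo matching: for each pixel in the left image,
    slide a patch window along the same row of the right image and
    pick the disparity with the minimum sum of absolute differences."""
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1].astype(int)
            costs = [np.abs(ref - right[y - r:y + r + 1,
                                        x - d - r:x - d + r + 1].astype(int)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

A coarse depth map of this kind is the kind of input the abstract's area merging step could then use to separate the foreground object from its background.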


Meme Analysis using Image Captioning Model and GPT-4

  • Marvin John Ignacio;Thanh Tin Nguyen;Jia Wang;Yong-Guk Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.628-631
    • /
    • 2023
  • We present a new approach to evaluating texts generated by Large Language Models (LLMs) for meme classification. Analyzing an image with embedded text, i.e. a meme, is challenging even for existing state-of-the-art computer vision models. By leveraging large image-to-text models, we can extract image descriptions that can be used in other tasks, such as classification. In our methodology, we first generate image captions using BLIP-2 models. Using these captions, we use GPT-4 to evaluate the relationship between the caption and the meme text. The results show that OPT-6.7B provides a better rating than other LLMs, suggesting that the proposed method has potential for meme classification.

Digital X-ray Imaging in Dentistry (치과에서 디지털 x-선 영상의 이용)

  • Kim Eun-Kyung
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.29 no.2
    • /
    • pp.387-396
    • /
    • 1999
  • In dentistry, RadioVisioGraphy was introduced as the first electronic dental x-ray imaging modality in 1989. Thereafter, many types of direct digital radiographic systems were produced in the following decade, based either on charge-coupled device (CCD) or on storage phosphor technology. In addition, new types of digital radiographic systems using amorphous selenium, image intensifiers, etc. are under development. The advantages of digital radiographic systems include the elimination of chemical processing, reduction in radiation dose, image processing, computer storage, electronic transfer of images and so on. Image processing includes image enhancement, image reconstruction, digital subtraction, etc. Digital subtraction and reconstruction in particular can be applied in many aspects of clinical practice and research. Electronic transfer of images enables the filmless dental hospital and teleradiology/teledentistry systems. Since the first image management and communications system (IMACS) for dentomaxillofacial radiology was reported in 1992, the use of IMACS in dental hospitals has been increasing. Meanwhile, research on computer-assisted diagnosis, such as structural analysis of the bone trabecular patterns of the mandible, feature extraction, automated identification of normal landmarks on cephalometric radiographs, and automated image analysis for caries or periodontitis, has been performed actively in the last decade. Further developments in digital radiographic imaging modalities, image transmission systems, image processing and automated analysis software will change traditional clinical dental practice in the 21st century.


COMPARATIVE STUDY OF THREE-DIMENSIONAL RECONSTRUCTIVE IMAGES OF FACIAL BONE USING COMPUTED TOMOGRAPHY (전산화단층상을 이용한 안면골의 3차원재구성상의 비교 연구)

  • Song Nam-Kyu;Koh Kwang-Joon
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.22 no.2
    • /
    • pp.283-290
    • /
    • 1992
  • The purpose of this study was to evaluate the spatial relationships of the facial bones more accurately. For this study, three-dimensional images of a dry skull were reconstructed using a computer image analysis system and the three-dimensional reconstruction program included with CT. The obtained results were as follows: 1. Three-dimensional reconstructive CT produces images with better resolution and more contrast. 2. Good marginal images of anatomical structures were obtained with both three-dimensional CT and the computer image analysis system, but the roof of the orbit, the lacrimal bone and the squamous portion of the temporal bone were hardly detectable. 3. Partial loss of image data was observed during regeneration of the saved image data on three-dimensional CT. 4. Reconstruction of the three-dimensional images was faster with the computer image analysis system, but the capacity of the hardware limited the input of image data and the three-dimensional reconstruction process. 5. The spatial relationship between the region of interest and the surrounding structures could be observed on three-dimensional reconstructive images without an invasive method.


Region of Interest Heterogeneity Assessment for Image using Texture Analysis

  • Park, Yong Sung;Kang, Joo Hyun;Lim, Sang Moo;Woo, Sang-Keun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.11
    • /
    • pp.17-21
    • /
    • 2016
  • Heterogeneity assessment of tumors is important in oncology for the diagnosis and therapy of cancer. The aim of this study was to assess the heterogeneity of tumor regions in PET images using texture analysis. For this purpose, a sphere phantom was inserted into a torso phantom, and 156.84 MBq of a Cu-64 labeled radioisotope was administered into the torso phantom. PET/CT images were acquired with a PET/CT scanner (Discovery 710, GE Healthcare, Milwaukee, WI). Texture analysis of the PET images was performed using the occurrence probabilities of the gray level co-occurrence matrix; energy and entropy are among the resulting textural features. We performed the texture analysis in the tumor, the liver, and the background, assessing the textural features of each region of interest (ROI) in the torso phantom with in-house software. The calculated entropy in the tumor, liver, and background was 5.322, 7.639, and 7.818, respectively. A further study will assess heterogeneity using clinical tumor PET images.
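
The energy and entropy features mentioned above come from the gray level co-occurrence matrix (GLCM). A minimal sketch of that computation, assuming horizontal pixel adjacency and a small quantisation level count (the in-house software's actual offsets and level count are not stated in the abstract):

```python
import numpy as np

def glcm(img, levels=8):
    """Gray level co-occurrence matrix for horizontally adjacent pixel
    pairs, quantised to `levels` grey levels and normalised so the
    entries form a probability distribution."""
    scale = int(img.max()) or 1              # avoid division by zero
    q = (img.astype(float) / scale * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def energy(p):
    """Sum of squared co-occurrence probabilities (uniformity)."""
    return float((p ** 2).sum())

def entropy(p):
    """Shannon entropy of the co-occurrence distribution, in bits."""
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())
```

A perfectly homogeneous region yields entropy 0 and energy 1, while a heterogeneous tumor region spreads probability mass across the matrix and raises entropy, which matches the relative ordering of the reported values.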

Comparison of personal computer with CT workstation in the evaluation of 3-dimensional CT image of the skull (전산화단층촬영 단말장치와 개인용 컴퓨터에서 재구성한 두부 3차원 전산화단층영상의 비교)

  • Kang Bok-Hee;Kim Kee-Deog;Park Chang-Seo
    • Imaging Science in Dentistry
    • /
    • v.31 no.1
    • /
    • pp.1-7
    • /
    • 2001
  • Purpose: To evaluate the usefulness of 3-dimensional images reconstructed on a personal computer in comparison with those from a CT workstation by quantitative comparison and analysis. Materials and Methods: Spiral CT data obtained from 27 persons were transferred from the CT workstation to a personal computer and reconstructed as 3-dimensional images on the personal computer using V-works 2.0™. One observer obtained 14 measurements on the reconstructed 3-dimensional images on both the CT workstation and the personal computer. A paired t-test was used to evaluate the intraobserver difference and the mean value of each measurement on the CT workstation and the personal computer. Pearson correlation analysis and % incongruence were also performed. Results: I-Gn, N-Gn, N-A, N-Ns, B-A, and G-Op did not show any statistically significant difference (p>0.05); B-O, B-N, Eu-Eu, Zy-Zy, Biw, D-D, and Orbrd R and L showed statistically significant differences (p<0.05), but the mean differences of all measurements were below 2 mm, except for D-D. The correlation coefficient r was greater than 0.95 for I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and was 0.75 for B-O, 0.78 for D-D, and 0.82 for both Orbrd R and L. The % incongruence was below 4% for I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and was 7.18%, 10.78%, 4.97%, and 5.89% for B-O, D-D, and Orbrd R and L, respectively. Conclusion: The personal computer can be considered highly useful for reconstruction of 3-dimensional images in terms of economy, accessibility and convenience, except for thin bones and landmarks that are difficult to locate.


Caption Extraction in News Video Sequence using Frequency Characteristic

  • Youglae Bae;Chun, Byung-Tae;Seyoon Jeong
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.835-838
    • /
    • 2000
  • Popular methods for extracting a text region from video images are generally based on analysis of the whole image, such as merge-and-split methods and comparison of two frames, and therefore take a long computing time because the whole image is processed. This paper suggests a faster method of extracting a text region without processing the whole image. The proposed method uses line sampling, the FFT and neural networks in order to extract text in real time. Text areas generally lie in the higher frequency domain and can thus be characterized using the FFT; candidate text areas are found by applying these high-frequency characteristics to a neural network, and the final text area is extracted by verifying the candidate areas. Experimental results show a perfect candidate extraction rate and about a 92% text extraction rate. The strengths of the proposed algorithm are its simplicity, real-time processing by not processing the entire image, and fast skipping of images that do not contain text.
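
The line sampling plus FFT stage described above can be sketched as follows: sample a subset of scan lines and flag those whose spectral energy is concentrated in high frequencies. Note that the paper feeds these frequency characteristics to a neural network; this sketch substitutes a fixed energy-ratio threshold, and the step size, cutoff and threshold values are illustrative assumptions:

```python
import numpy as np

def high_freq_ratio(line, cutoff=0.25):
    """Fraction of a scan line's spectral energy above `cutoff` of the
    available frequency band; overlaid caption text tends to raise it."""
    spec = np.abs(np.fft.rfft(line - line.mean())) ** 2
    k = int(len(spec) * cutoff)
    total = spec.sum()
    return spec[k:].sum() / total if total > 0 else 0.0

def candidate_rows(img, step=8, threshold=0.3):
    """Sample every `step`-th row (line sampling) and flag rows whose
    high-frequency energy ratio suggests overlaid text."""
    return [r for r in range(0, img.shape[0], step)
            if high_freq_ratio(img[r].astype(float)) > threshold]
```

Because only every `step`-th row is transformed, the cost per frame is a small fraction of a whole-image analysis, which is the speed advantage the abstract claims.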
