• Title/Summary/Keyword: Image Layer


Design of detection method for malicious URL based on Deep Neural Network (뉴럴네트워크 기반에 악성 URL 탐지방법 설계)

  • Kwon, Hyun;Park, Sangjun;Kim, Yongchul
    • Journal of Convergence for Information Technology / v.11 no.5 / pp.30-37 / 2021
  • Various devices are connected to the Internet, and attacks that exploit it are increasing. Among these are attacks that use malicious URLs to direct users to phishing sites or to distribute malware. Detecting such malicious URLs is therefore an important security issue. Among recent deep learning technologies, neural networks show good performance in image recognition, speech recognition, and pattern recognition, and they can be applied to analyzing and detecting the characteristic patterns of malicious URLs. In this paper, the performance of a neural-network-based malicious URL detector was analyzed across various parameters: the activation function, the learning rate, and the network structure were varied while detection performance was measured. The experimental data were built by crawling the Alexa top one million sites and Whois, and TensorFlow was used as the machine learning library. The experiments showed that with 4 layers, a learning rate of 0.005, and 100 nodes per layer, the model achieved an accuracy of 97.8% and an F1 score of 92.94%.
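
The reported best configuration maps onto a small fully connected network. Below is a minimal TensorFlow/Keras sketch of that setup, assuming simple lexical URL features; the feature function, example URLs, and labels are hypothetical stand-ins, since the abstract does not give the paper's exact feature set. Only the 4 hidden layers of 100 nodes and the 0.005 learning rate follow the reported configuration.

```python
import numpy as np
import tensorflow as tf

def url_features(url: str) -> np.ndarray:
    """Toy lexical features (hypothetical; not the paper's feature set)."""
    return np.array([
        len(url),                          # overall URL length
        url.count('.'),                    # subdomain depth hint
        url.count('-'),                    # hyphen abuse common in phishing hosts
        url.count('/'),                    # path depth
        sum(c.isdigit() for c in url),     # digit count
    ], dtype=np.float32)

# 4 hidden layers x 100 nodes, sigmoid output for malicious/benign
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(100, activation='relu') for _ in range(4)]
    + [tf.keras.layers.Dense(1, activation='sigmoid')]
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),  # reported best
    loss='binary_crossentropy',
    metrics=['accuracy'],
)

# Dummy two-sample dataset: 0 = benign, 1 = malicious (labels assumed)
X = np.stack([url_features(u) for u in
              ["http://example.com/login", "http://paypa1-verify.xyz/a"]])
y = np.array([0, 1], dtype=np.float32)
model.fit(X, y, epochs=5, verbose=0)
```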

GIS Information Generation for Electric Mobility Aids Based on Object Recognition Model (객체 인식 모델 기반 전동 이동 보조기용 GIS 정보 생성)

  • Je-Seung Woo;Sun-Gi Hong;Dong-Seok Park;Jun-Mo Park
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.200-208 / 2022
  • In this study, an automatic information collection system and a geographic-information construction algorithm for transportation-disadvantaged users of electric mobility aids are implemented using an object recognition model. The system recognizes objects that a disabled person encounters while moving and acquires their coordinates, providing a route-selection map improved over existing geographic information for the disabled. Data collection consists of four layers, including the HW layer: image and location information are collected, transmitted to a server, recognized, and the data needed for geographic-information generation are extracted through classification. A driving experiment was conducted in an actual barrier-free zone to confirm how efficiently the algorithm collects real data and generates geographic information. The processing performance was 70.92 EA/s in the first run, 70.69 EA/s in the second, and 70.98 EA/s in the third, averaging 70.86 EA/s over the three experiments, and it took about 4 seconds for results to be reflected in the actual geographic information. The results confirm that mobility-impaired users of electric mobility aids can travel safely using new geographic information delivered faster than at present.
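
The four-layer collection flow described above can be pictured as a small pipeline: collect an image plus a GPS fix, send it to a server, recognize objects, and keep only those relevant to route selection. The sketch below is a hypothetical illustration of that dataflow; every name, the `detect()` stub, and the record schema are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Observation:      # output of the HW/collection layers
    image: bytes
    lat: float
    lon: float

@dataclass
class GISRecord:        # output of the geographic-information layer
    label: str          # e.g. "curb_ramp", "stairs"
    lat: float
    lon: float

def detect(image: bytes) -> list[str]:
    """Stand-in for the server-side object recognition model."""
    return ["curb_ramp"]

def to_gis(obs: Observation) -> list[GISRecord]:
    # classification step: turn recognized objects into geotagged records
    return [GISRecord(label, obs.lat, obs.lon) for label in detect(obs.image)]

records = to_gis(Observation(image=b"...", lat=35.87, lon=128.60))
```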

Developing of latent fingerprint on human skin (생체피부에서의 잠재지문 현출)

  • Lee, Hee-Il;Choi, Mi-Jung;Kim, Jai-Hoon;Park, Sung-Woo
    • Analytical Science and Technology / v.21 no.3 / pp.222-228 / 2008
  • On living skin the chances of successfully developing a latent fingerprint are very limited, because continual perspiration and rapid absorption diffuse the print into the lipophilic layer of the skin. A study was conducted to investigate effective methods for developing latent fingerprints on human skin and on pig skin, which resembles a corpse's skin. We used commercial fingerprint powders (black powder, black magnetic powder, fluorescent magnetic powder), cyanoacrylate (CA) fuming, and direct lifting methods (lifting paper, glass, and glossy photo paper). Fresh fingerprints on living skin were developed with S-powder black, CA fuming, and CA fuming followed by S-powder or fluorescent powder; the other powders tended to overwhelm both the latent print and the background. However, latent fingerprint residue disappeared over time after deposition on living skin. On pig skin resembling a corpse's skin, latent fingerprints were detected with CA fuming followed by S-powder, and a print aged for 6 h at 25 °C and 40% relative humidity yielded excellent ridge detail after 1 min of CA fuming. Enhancement of the detected fingerprint image with a forensic light source was also achieved.

Improving target recognition of active sonar multi-layer processor through deep learning of a small amount of imbalanced data (소수 불균형 데이터의 심층학습을 통한 능동소나 다층처리기의 표적 인식성 개선)

  • Young-Woo Ryu;Jeong-Goo Kim
    • The Journal of the Acoustical Society of Korea / v.43 no.2 / pp.225-233 / 2024
  • Active sonar transmits sound waves to detect covertly maneuvering underwater objects and receives the signals reflected from the target. However, in addition to the target's echo, the received signal is mixed with seafloor and sea-surface reverberation, biological noise, and other noise, making target recognition difficult. Conventional techniques that detect signals above a threshold not only produce false detections or miss targets depending on the chosen threshold, but also require an appropriate threshold to be set for each underwater environment. To overcome this, automatic threshold calculation through techniques such as Constant False Alarm Rate (CFAR) and the application of advanced tracking filters and association techniques have been studied, but they remain limited in environments where a large number of detections occur. With the recent development of deep learning, efforts have been made to apply it to underwater target detection, but active sonar data for training a discriminator are hard to acquire: the data are scarce, and targets are far outnumbered by non-targets, so the data are severely imbalanced. In this paper, images of the energy distribution of the detection signal are used, and a classifier is trained in a way that accounts for this imbalance to distinguish targets from non-targets; the classifier is then added to the existing processing chain. The proposed technique minimized target misclassification and eliminated non-targets, making target recognition easier for active sonar operators, and its effectiveness was verified with sea-experiment data obtained in the East Sea.
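
The abstract says the classifier is trained "in a way that accounts for the imbalance" of targets versus non-targets. One common realization is inverse-frequency class weighting of the loss, sketched below on dummy energy-distribution images; the toy CNN, image size, and 5:95 split are assumptions, not the paper's architecture or data.

```python
import numpy as np
import tensorflow as tf

def class_weights(y: np.ndarray) -> dict:
    """Weight each class by inverse frequency so rare targets count more."""
    counts = np.bincount(y.astype(int))
    return {i: counts.sum() / (len(counts) * c) for i, c in enumerate(counts)}

# Placeholder CNN over 64x64 single-channel energy-distribution images
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

X = np.random.rand(100, 64, 64, 1)    # dummy detection-signal images
y = np.r_[np.ones(5), np.zeros(95)]   # 5 targets vs. 95 non-targets
model.fit(X, y, class_weight=class_weights(y), epochs=1, verbose=0)
```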

Highly Doped Nano-crystal Embedded Polymorphous Silicon Thin Film Deposited by Using Neutral Beam Assisted CVD at Room Temperature

  • Jang, Jin-Nyeong;Lee, Dong-Hyeok;So, Hyeon-Uk;Hong, Mun-Pyo
    • Proceedings of the Korean Vacuum Society Conference / 2012.08a / pp.154-155 / 2012
  • The promise of nano-crystallites (nc) as a technological material, for applications including display backplanes and solar cells, may ultimately depend on tailoring their behavior through doping and crystallinity. Impurities can strongly modify the electronic and optical properties of bulk and nc semiconductors, and heavy doping also affects the structural properties (grain size and crystal fraction) of nc-Si thin films. As discussed in several reports, P atoms or radicals tend to reside on the surfaces of the nc grains. This P-radical segregation on the nano-grain surfaces, called self-purification, may suppress new nucleation because of the five-fold coordination of P. In addition, a P doping level of ~2×10^21 at/cm^3 is the solubility limit of P in Si, and the solubility in an nc thin film should be smaller; non-activated P therefore segregates to the grain boundaries and the nc surfaces. These mechanisms can prevent new nucleation on existing grain surfaces, which is why most studies have shown that heavily doped nc thin films deposited by conventional PECVD tend to have low crystallinity: the formation energy of nucleation there is higher than on the nc surface of intrinsic material. A deposition technology that can produce highly doped and simultaneously highly crystallized nc films at low temperature would enable next-generation flexible-device processes. Recently, we have been developing a novel CVD technology with a neutral particle beam (NPB) source, named neutral beam assisted CVD (NBaCVD), which controls the energy of incident neutral particles in the range of 1-300 eV to enhance atomic activation and thin-film crystallinity at low temperature. During formation of nc-/pm-Si thin films by NBaCVD under various process conditions, the NPB energy was directly controlled by the reflector bias and effectively increased the crystal fraction (~80%) through uniformly distributed nc grains 3-10 nm in size. For phosphorus-doped Si thin films, the doping efficiency also increased with the reflector bias (i.e., with increasing NPB energy). At a reflector bias of 330 V, the activation energy of the doped nc-Si thin film fell as low as 0.001 eV, meaning the dopants fully occupy substitutional sites even though the film has a nano-sized grain structure. The activated dopant concentration reached up to 10^20 #/cm^3 at a very low process temperature (<80 °C) without any post-annealing; theoretically, dopant concentrations of order 10^20 #/cm^3 in Si thin films are reachable only with high-temperature processing or post-annealing above 650 °C. In general, as the grain size decreases, the dopant binding energy increases roughly as the inverse of the grain diameter, and the dopant is hardly activated. The highly doped nc-Si thin film from the low-temperature NBaCVD process had a smaller average grain size, under 10 nm (measured by GIWAXS, GISAXS, and TEM), yet achieved very high activation of the phosphorus dopant; the NPB transports enough energy for doping and crystallization even without additional thermal energy. TEM images show that no incubation layer forms between the nc-Si film and the SiO2 underlayer, and that the highly crystallized nc-Si film consists of uniformly distributed nano-grains in polymorphous tissue. Nucleation should start in the first layer on the SiO2 underlayer, but the grains hardly grow into cone-shaped micrometer-size grains. The pm-Si thin film with evenly embedded nc grains forms through competition between nucleation and crystal growth, which depends on the NPB energy. In light-soaking tests of photoconductivity, conventional intrinsic and n-type doped a-Si thin films showed the typical degradation, while all nc-Si thin films processed by NBaCVD degraded by only a few percent. FTIR and Raman spectra indicate that energetic hydrogen NB atoms passivate the nano-grain boundaries during the NBaCVD process, owing to the high diffusivity and chemical potential of hydrogen atoms.
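
The inverse-diameter scaling invoked above can be written compactly. A hedged reading, with a schematic prefactor that is not taken from the paper:

```latex
% Dopant binding energy vs. grain diameter d: smaller grains bind the
% dopant more strongly, suppressing activation. E_0 and d_0 are schematic.
E_b(d) \approx E_b^{\mathrm{bulk}} + E_0 \,\frac{d_0}{d},
\qquad \lim_{d \to \infty} E_b(d) = E_b^{\mathrm{bulk}}
```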


An Estimation of Concentration of Asian Dust (PM10) Using WRF-SMOKE-CMAQ (MADRID) During Springtime in the Korean Peninsula (WRF-SMOKE-CMAQ(MADRID)을 이용한 한반도 봄철 황사(PM10)의 농도 추정)

  • Moon, Yun-Seob;Lim, Yun-Kyu;Lee, Kang-Yeol
    • Journal of the Korean earth science society / v.32 no.3 / pp.276-293 / 2011
  • In this study, a modeling system consisting of Weather Research and Forecasting (WRF), Sparse Matrix Operator Kernel Emissions (SMOKE), the Community Multiscale Air Quality (CMAQ) model, and the CMAQ-Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (MADRID) was applied to estimate PM10 enhancements during Asian dust events in Korea. In particular, five experimental formulas were applied in the WRF-SMOKE-CMAQ (MADRID) system to estimate Asian dust emissions from the major source regions in China and Mongolia: the US Environmental Protection Agency (EPA) model, the Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model, and the Dust Entrainment and Deposition (DEAD) model, as well as the formulas of Park and In (2003) and Wang et al. (2000). According to the weather maps, backward trajectories, and satellite image analyses, Asian dust is generated by strong downward winds associated with an upper trough from a stagnation wave caused by development of the upper jet stream, and its transport to Korea appears behind a surface front related to a cut-off low (the comma-shaped cloud in satellite images). In the WRF-SMOKE-CMAQ estimation of PM10 concentrations, the formula of Wang et al. reproduced the temporal and spatial distribution of Asian dust well, and the GOCART model had low mean bias and root-mean-square errors. In the vertical-profile analysis using Wang et al.'s formula, strong Asian dust with concentrations above 800 μg/m^3 during March 31 to April 1, 2007 was transported below the boundary layer (about 1 km high), while weak Asian dust with concentrations below 400 μg/m^3 during March 16-17, 2009 was transported above the boundary layer (about 1-3 km high). Furthermore, the difference in PM10 between the CMAQ and CMAQ-MADRID models for March 31 to April 1, 2007 was large over East Asia, with CMAQ-MADRID about 25 μg/m^3 higher than CMAQ. In addition, the PM10 removed by the cloud liquid-phase mechanism in CMAQ-MADRID reached a maximum of 15 μg/m^3 over East Asia.
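
For orientation, the GOCART-type dust emission flux cited above is commonly written in the threshold form below (after Ginoux et al., 2001); the exact formulation used in the paper may differ.

```latex
% F_p: emission flux for size bin p; C: empirical constant; S: source
% (erodibility) function; s_p: size-bin fraction; u_{10}: 10 m wind
% speed; u_{t,p}: threshold velocity for bin p.
F_p =
\begin{cases}
  C\, S\, s_p\, u_{10}^{2}\,\bigl(u_{10} - u_{t,p}\bigr), & u_{10} > u_{t,p},\\[4pt]
  0, & \text{otherwise.}
\end{cases}
```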

Measurements of Carotid Intima, Media, and Intima-media Thickness and Their Clinical Importance (경동맥의 내막, 중막, 내중막 두께 분리측정 및 임상적 중요성)

  • Kim Wuon-Shik;Jeong Hwan-Taek;No Ki-Yong;Bae Jang-Ho
    • Progress in Medical Physics / v.16 no.4 / pp.207-213 / 2005
  • The severity of carotid intima-media thickness (IMT) is an independent predictor of atherosclerosis, which causes transient cerebral ischemia, stroke, and coronary events such as myocardial infarction. The IMT consists of the intima thickness (IT) and the media thickness (MT), but the individual clinical significance of IT and MT has not been well studied. We devised a method of measuring IT, MT, and IMT using B-mode ultrasound image processing for the diagnosis of atherosclerosis. To inspect their clinical significance, 144 consecutive patients (mean age 57 years; 72 males) underwent common carotid artery scanning with high-resolution ultrasound. The IT (p<0.05) and MT (p<0.05), as well as the IMT (p<0.01), of patients with atherosclerotic disease were significantly thicker than those of patients without it. Patients with hypertension showed significantly thicker IT (p<0.01), MT (p<0.001), and IMT (p<0.001), whereas only IT was thicker in smokers (p<0.01). IT (r=0.374, p=0.001), MT (r=0.433, p=0.000), and IMT (r=0.479, p=0.000) correlated positively with age. The coefficients of determination (r^2) were 92.4% for IMT and MT, 49.1% for IMT and IT, and 27.4% for IT and MT. These results suggest that the intima layer of the carotid artery has a physiology different from that of the media layer.
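
One plausible reading of the B-mode measurement is edge detection along a vertical intensity profile across the far wall: the lumen-intima and media-adventitia interfaces appear as bright rising edges, with the intima-media transition as the dark step between them. The sketch below illustrates that generic idea; it is an assumption, not the paper's algorithm.

```python
import numpy as np

def wall_interfaces(profile: np.ndarray, pixel_mm: float) -> dict:
    """Locate lumen-intima (LI), intima-media (IM), and media-adventitia
    (MA) edges on one intensity profile; convert to thicknesses in mm."""
    g = np.gradient(profile.astype(float))
    li = int(np.argmax(g))                        # brightest rising edge
    ma = li + 1 + int(np.argmax(g[li + 1:]))      # next rising edge
    im = li + 1 + int(np.argmin(g[li + 1:ma]))    # falling edge in between
    return {"IT_mm": (im - li) * pixel_mm,
            "MT_mm": (ma - im) * pixel_mm,
            "IMT_mm": (ma - li) * pixel_mm}

# Dummy profile: lumen, bright intima, darker media, bright adventitia
profile = np.array([10, 12, 80, 60, 30, 25, 90, 95], dtype=float)
print(wall_interfaces(profile, pixel_mm=0.05))
```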


Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning, research on unstructured data analysis has been actively conducted, with remarkable results in fields such as classification, summarization, and generation. Among text analysis fields, text classification is the most widely used in academia and industry. It includes binary classification (one label from two classes), multi-class classification (one label from several classes), and multi-label classification (multiple labels from several classes). Multi-label classification requires a different training method because each instance carries multiple labels, and since the number of labels to predict grows with the number of labels and classes, prediction difficulty increases and performance improvement becomes hard. To overcome these limitations, label embedding has been actively studied: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, these techniques consider only linear relationships between labels or compress labels by random transformation, so they cannot capture non-linear label relationships and cannot create a latent label space that sufficiently preserves the information of the original labels. Recently, attempts to improve performance by applying deep learning to label embedding have increased; label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding loses a large amount of information when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space, which shows up as the vanishing-gradient problem during backpropagation. Skip connections were devised to solve this: by adding a layer's input to its output, gradients are preserved during backpropagation and efficient learning is possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using them in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, forming a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive the high-dimensional keyword label space and a low-dimensional latent label space. Using these, we predicted the compressed keyword vector in the latent label space from paper abstracts and evaluated multi-label classification by restoring the predicted vector to the original label space. The accuracy, precision, recall, and F1 score of multi-label classification based on the proposed methodology were far superior to those of traditional multi-label classification methods, indicating that the derived latent label space reflects the information of the high-dimensional label space well and ultimately improves multi-label classification itself. In addition, the utility of the methodology was examined by comparing its performance across domain characteristics and across the number of dimensions of the latent label space.
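
A minimal sketch of the proposed structure, an autoencoder over the label space with an additive skip connection in both the encoder and the decoder, is given below in TensorFlow/Keras. All dimensions and layer sizes are illustrative assumptions; only the skip-connected encoder/decoder idea comes from the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers

LABEL_DIM, LATENT_DIM, HIDDEN = 1000, 64, 256   # illustrative sizes

inp = layers.Input(shape=(LABEL_DIM,))           # multi-hot label vector
h1 = layers.Dense(HIDDEN, activation='relu')(inp)
h2 = layers.Dense(HIDDEN, activation='relu')(h1)
h2 = layers.Add()([h1, h2])                      # encoder skip connection
z = layers.Dense(LATENT_DIM, activation='relu')(h2)   # latent label space

d1 = layers.Dense(HIDDEN, activation='relu')(z)
d2 = layers.Dense(HIDDEN, activation='relu')(d1)
d2 = layers.Add()([d1, d2])                      # decoder skip connection
out = layers.Dense(LABEL_DIM, activation='sigmoid')(d2)  # restored labels

autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# Train on multi-hot keyword vectors; downstream, a text model predicts z
# from the abstract and the decoder restores the full label vector.
```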

Composition Ratio Analysis of Transesterification Products of Olive Oil by Using Thin Layer Chromatography and Their Applicability to Cosmetics (올리브 오일의 에스터 교환반응 생성물의 TLC를 이용한 조성비 분석 및 화장품에의 응용가능성 평가)

  • Park, So Hyun;Shin, Hyuk Soo;Kim, A Rang;Jeong, Hyo Jin;Xuan, Song Hua;Hong, In Kee;Lee, Dae Bong;Park, Soo Nam
    • Applied Chemistry for Engineering / v.29 no.3 / pp.342-349 / 2018
  • In this study, the physicochemical properties, emulsifying capacity, moisture retention, and cytotoxicity of the composite material produced by transesterification of olive oil (olive oil esters) were investigated for cosmetic applications. Olive oil esters with short (S) and long (L) reaction times were studied. From TLC-image analysis, the composition of olive oil esters S was 5.2, 24.1, 46.4, and 21.9% mono-, di-, tri-glyceride, and fatty acid ethyl ester, respectively; that of olive oil esters L was 4.1, 24.7, 40.6, and 28.8%, respectively. The iodine value, acid value, saponification value, unsaponifiable matter, refractive index, and specific gravity were determined, and purity tests were carried out and normalized to establish standards and test methods for using olive oil esters in cosmetics. To evaluate their emulsifying capacity, an O/W emulsion was prepared without surfactants, and the formation of emulsified particles was confirmed. After applying the olive oil esters to human skin for 5 days, skin moisture retention improved by 13.1% from the initial state. In the toxicity evaluation on human skin cells, the olive oil esters showed 90% or higher cell viability at 0.2-200 μg/mL. These results suggest that olive oil esters can be used as natural, non-toxic ingredients in the cosmetics industry.
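
The TLC composition ratios reduce to normalizing integrated band intensities to percentages. The sketch below shows only that normalization step; the intensity values are invented, not the paper's densitometry data.

```python
def composition_percent(bands: dict[str, float]) -> dict[str, float]:
    """Normalize integrated TLC band intensities to percent composition."""
    total = sum(bands.values())
    return {name: round(100 * v / total, 1) for name, v in bands.items()}

bands = {"monoglyceride": 52.0, "diglyceride": 241.0,
         "triglyceride": 464.0, "fatty_acid_ethyl_ester": 219.0}
print(composition_percent(bands))
# {'monoglyceride': 5.3, 'diglyceride': 24.7,
#  'triglyceride': 47.5, 'fatty_acid_ethyl_ester': 22.4}
```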

A Study on the Frequency of Occurrence of the Aortic Dissection using CT (CT 검사에서 대동맥박리(aortic dissection)의 발생빈도에 관한 고찰)

  • Dong, Kyung-Rae;Choi, Sung-Kwan;Jang, Young-Ill;Ro, Sang-Ho
    • Journal of radiological science and technology / v.31 no.2 / pp.115-121 / 2008
  • Purpose: Aortic dissection is a very dangerous disease with a poor prognosis, in which blood flows out of the true lumen through a tear in the aortic intima, rapidly dissecting the inner and outer layers of the media. It is difficult to diagnose clinically with plain X-rays. This study investigated the frequency of occurrence by age among patients whose aortic dissection was identified by CT (computed tomography). Materials and methods: We investigated the yearly trend, gender, age, and requesting department for the 112 patients who underwent CT at C-University Hospital over two years, from January 2005 to December 2006. MIP and SSD reconstructions of the CT images and VRT images were obtained for accurate observation, and plain X-ray and CT findings were compared. Results and conclusion: 1. By year, 47 patients (41.9%) were scanned in 2005, increasing about 1.4-fold to 65 (58.1%) in 2006. 2. By gender, 45 males (40.1%) and 67 females (59.9%) were scanned. Aortic dissection was found in 9 of the 45 males (20%) and 21 of the 67 females (31.3%); the occurrence rate in women was 1.6 times that in men, and women also outnumbered men among examinees by 1.5 times. 3. By age, no patient was under 30 years old, while 88.3% of all patients were 41 to 80 years old; the older the age group, the higher the occurrence of aortic dissection, and the difference across age groups was statistically significant (p<0.01). 4. The departments requesting CT were the emergency department, 46 (41.1%); circulatory internal medicine, 37 (33.0%); chest surgery, 13 (11.6%); and others, 16 (14.3%). Emergency medicine and circulatory internal medicine together accounted for 74.1%, showing that aortic dissection is a very dangerous disease whose patients present mainly through the emergency room. 5. Of the 30 aortic dissection patients, 22 (73.3%) had normal plain X-ray readings and only 8 (26.7%) were abnormal; therefore, CT should be performed to assess aortic dissection accurately.
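
One standard way to obtain the reported age-group significance (p < 0.01) is a chi-square test on an age-group by dissection contingency table, as sketched below; the counts are invented placeholders, and whether the authors used this exact test is an assumption.

```python
from scipy.stats import chi2_contingency

#        dissection, no dissection  (invented counts per age group)
table = [[2, 20],    # 41-50
         [6, 22],    # 51-60
         [9, 18],    # 61-70
         [13, 12]]   # 71-80
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```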
