• Title/Summary/Keyword: Deleted Data

Search Results: 220 (processing time: 0.031 seconds)

Methods on Recognition and Recovery Process of Censored Areas in Digital Image (디지털영상의 특정영역 인식과 처리 방안)

  • 김감래;김욱남;김훈정
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.20 no.1
    • /
    • pp.1-11
    • /
    • 2002
  • This study aims at the efficient utilization of imagery containing security-sensitive target objects. First, it analyzes the problems posed by areas deleted for security reasons in aerial photographic images. Second, it applies clustering and labeling to recognize the censored areas of an image. Finally, it seeks to maximize the usability of digital image data through a post-processing algorithm. The results show that the brightness values of the image increased with the topography and the quantity of topographic features, and that these values can serve as useful estimates when judging the topographic information contained in the whole image. In image recognition and post-processing, however, urban and suburban regions yielded worse results than mountainous regions, because much of their topography and many of their topographic features were recognized as similar to the deletion processing applied to the existing security target objects. This result shows that the topography and the quantity of topographic features decisively affect the recognition and processing of the image.
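
As a rough illustration of the clustering/labeling step (the abstract does not give the paper's actual algorithm), a censored area of roughly uniform brightness can be isolated by thresholding and connected-component labeling. The toy image, the threshold, and the 4-connectivity below are illustrative assumptions:

```python
# Illustrative sketch, not the paper's method: censored areas in an aerial image
# are often uniform-brightness patches, so they can be found by thresholding
# followed by connected-component labeling.

def label_regions(mask):
    """4-connected component labeling on a binary mask (list of lists of 0/1)."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                next_label += 1
                stack = [(r, c)]  # flood fill from this seed pixel
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not labels[y][x]:
                        labels[y][x] = next_label
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels, next_label

# Toy 5x5 image with two separate "censored" blocks of saturated pixels.
image = [
    [255, 255, 0, 0, 0],
    [255, 255, 0, 0, 0],
    [0,   0,   0, 0, 0],
    [0,   0,   0, 255, 255],
    [0,   0,   0, 255, 255],
]
mask = [[1 if px >= 250 else 0 for px in row] for row in image]
labels, count = label_regions(mask)
print(count)  # -> 2 censored regions found
```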

Development and Application of Menu Engineering Technique for University Residence Hall Foodservice (대학 기숙사 급식의 메뉴 운영 특성을 고려한 Menu Engineering기법 개발 및 적용)

  • 양일선;이해영;신서영;도현욱
    • Korean Journal of Community Nutrition
    • /
    • v.8 no.1
    • /
    • pp.62-70
    • /
    • 2003
  • This article summarizes the development and application of a menu engineering technique, 'Menu Engineering Modified by Preference (MEMP)'. The site selected for the project was a foodservice operation in a Yonsei University residence hall. Sales and food-cost data were collected from the daily sales reports for one month, and a food-preference survey was conducted during May 1999. Descriptive statistical analysis was performed with SAS/Win 6.12, and the menu-analysis calculations were carried out in an MS Excel 2000 spreadsheet. The MEMP technique has six category criteria and two dimensions: the contribution margin (CM) and the menu mix modified percentage (MMM%). The MMM% was calculated from sales volumes weighted by food preference. The CM and MMM% of each item were compared with the mean menu CM and with the 70% rule. Four classifications resulted from MEMP: 'STAR', 'PLOWHORSE', 'PUZZLE', and 'DOG'. 'STAR' items, the most popular and profitable, require rigid quality specifications to be maintained. For 'PLOWHORSE' items, which are relatively popular but yield a CM below the menu average, the decision actions include combining the item with lower-cost products and reducing the serving frequency or serving size. 'PUZZLE' items need changes in the menu combination, recipe improvement, and menu promotion. Finally, 'DOG' items should be deleted. This study demonstrates that menu information can be interpreted more easily with MEMP, making it an effective way to improve management decisions about university residence hall foodservice menus.
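
The two-dimensional MEMP classification can be sketched as follows. The menu data and preference weights are hypothetical, and the exact threshold form is an assumption; only the two dimensions (CM against the mean CM, MMM% against the 70% rule) and the four categories come from the abstract:

```python
# Hedged sketch of an MEMP-style classification. Assumption: an item is
# "popular" if its preference-weighted menu mix share (MMM%) reaches 70% of the
# expected equal share, and "profitable" if its contribution margin (CM) is at
# or above the menu's mean CM. Menu data are invented for illustration.

def memp_classify(items):
    """items: list of dicts with name, price, food_cost, sold, preference (0-1)."""
    weighted = [it["sold"] * it["preference"] for it in items]  # preference-weighted sales
    total_w = sum(weighted)
    mean_cm = sum(it["price"] - it["food_cost"] for it in items) / len(items)
    popularity_cutoff = (100.0 / len(items)) * 0.7  # 70% rule
    result = {}
    for it, w in zip(items, weighted):
        cm = it["price"] - it["food_cost"]
        mmm_pct = 100.0 * w / total_w
        popular, profitable = mmm_pct >= popularity_cutoff, cm >= mean_cm
        result[it["name"]] = ("STAR" if popular and profitable else
                              "PLOWHORSE" if popular else
                              "PUZZLE" if profitable else "DOG")
    return result

menu = [
    {"name": "bulgogi", "price": 5.0, "food_cost": 2.0, "sold": 120, "preference": 0.9},
    {"name": "curry",   "price": 4.0, "food_cost": 2.5, "sold": 100, "preference": 0.8},
    {"name": "steak",   "price": 8.0, "food_cost": 3.0, "sold": 20,  "preference": 0.7},
    {"name": "soup",    "price": 3.0, "food_cost": 2.2, "sold": 15,  "preference": 0.4},
]
print(memp_classify(menu))
# -> {'bulgogi': 'STAR', 'curry': 'PLOWHORSE', 'steak': 'PUZZLE', 'soup': 'DOG'}
```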

Design and Implementation of Scheduler Applications for Efficient Daily Management (효율적 일상 관리를 위한 일정관리 어플리케이션의 설계와 구현)

  • Park, Eunju;Han, Seungjun;Yoon, Jimin;Lim, Hankyu
    • Journal of Internet Computing and Services
    • /
    • v.22 no.2
    • /
    • pp.41-50
    • /
    • 2021
  • With the progress of IT and data-processing technology, Internet usage has increased rapidly and various smart devices have appeared. Modern people use smartphones to acquire the information they want in daily life, including leisure activities, regardless of place. This study designed and implemented a schedule-management application that supports effective management of daily life, such as noting schedules and sharing appointments. In addition to schedule registration, the application offers diary keeping, sharing registered schedules with other users via KakaoTalk, saving deleted schedules or diary entries to a designated folder, and continuous alarms for daily schedules. This application, differentiated from other applications and offering improved usability, is expected to help manage busy everyday life effectively.

An Effective Control Method for Improving Integrity of Mobile Phone Forensics (모바일 포렌식의 무결성 보장을 위한 효과적인 통제방법)

  • Kim, Dong-Guk;Jang, Seong-Yong;Lee, Won-Young;Kim, Yong-Ho;Park, Chang-Hyun
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.19 no.5
    • /
    • pp.151-166
    • /
    • 2009
  • To prove the integrity of digital evidence in an investigation, evidence protected with the MD5 (Message Digest 5) hash algorithm must be discarded if its integrity is damaged during the investigation. Although recovering evidence from deleted areas is essential for securing proof in the main phases of a case, it has been difficult to secure decisive evidence, because damaged evidence data makes the overall hash value differ from the original one. From this viewpoint, this paper proposes a novel model for the mobile forensic procedure, named "E-Finder (Evidence Finder)", to solve the existing problem. The E-Finder has 5 main phases and 15 procedures. We compared E-Finder with the methodologies of NIST (National Institute of Standards and Technology) and the Tata Elxsi Security Group. This paper thus contributes to the development and standardization of an investigation methodology for mobile forensics.
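
The integrity problem motivating E-Finder can be illustrated with the standard-library `hashlib` MD5. The per-block hashing shown here is a common forensic practice used purely for illustration, not necessarily E-Finder's own procedure, and the evidence bytes are invented:

```python
# Sketch of the problem: when a whole evidence image is protected by a single
# MD5 digest, damage to any part invalidates everything. Hashing per acquisition
# block (illustrative, not E-Finder's exact method) preserves the intact parts.
import hashlib

def md5(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

evidence = b"CALL-LOG|SMS-ARCHIVE|PHOTO-DATA|DELETED-AREA"
blocks = evidence.split(b"|")            # per-block acquisition units
whole_digest = md5(evidence)             # single digest over the whole image
block_digests = [md5(b) for b in blocks]

# Simulate damage to one block during examination.
damaged = b"CALL-LOG|SMS-ARCHIVE|PHOTO-????|DELETED-AREA"
damaged_blocks = damaged.split(b"|")

print(md5(damaged) == whole_digest)      # False: the whole image is rejected
intact = [md5(b) == d for b, d in zip(damaged_blocks, block_digests)]
print(intact)                            # only the damaged block fails
```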

Production Increase of Milk in Dairy Cow by Metabolic Profile Test (대사판정시험을 이용한 젖소의 우유증산)

  • Lee Chang-Woo;Kim Bonn-Won;Ra Jeong-Chan;Shin Sang-Tae;Kim Doo;Kim Jong-Taik;Hong Soon-Il
    • Journal of Veterinary Clinics
    • /
    • v.10 no.1
    • /
    • pp.65-94
    • /
    • 1993
  • This study examined the metabolic profiles of 1,349 Holstein cows from 91 commercial herds. Thirteen parameters, consisting of twelve blood components and body condition score, were examined, and their mean values, standard deviations, and standard limits (80% confidence limits) in each lactational stage were reported, along with the variation of each parameter with season, individual milk yield, herd adjusted corrected milk yield, and lactation number. A model of the metabolic profile test applicable to this country, where the average herd size is as small as fifteen cows, was designed. The metabolic profiles reflected in each parameter were discussed in relation to the adequacy of dietary intake for production, milk production, reproductive performance, and diseases, and possible measures to improve the productivity of dairy cows were proposed. Much of the variation in the parameters was due to differences between herds, and less to differences between seasons, individual milk yields, or lactational stages. As the average herd size in this country is small, it is believed that all the cows in a herd must be sampled; the individual result for each parameter is compared with the standard limit for its lactational stage, and the percentage of cows outside the standard limits in the herd is calculated as a criterion for evaluating the herd. Data outside the 99% confidence limits are to be deleted at first, but when their trend matches that of the data within the 99% confidence limits, the deleted data must be reviewed again, or some important information would be missed. The mean concentration of blood urea nitrogen in this study was much higher than those reported in England, the U.S.A., and Japan, and was similar to the upper limits reported there; it was therefore thought that blood urea nitrogen concentration is improper as a criterion for protein intake. The increase of serum total protein concentration beyond the standard limits was due to increased serum globulin concentration in most of the cows; the correlation coefficient between serum total protein and serum globulin concentration was 0.83. Serum globulin concentration was negatively related to the herd's adjusted corrected milk, as were serum albumin, calcium, and magnesium concentrations, which indicates that high-producing individuals or herds had not taken in sufficient protein/amino acids, calcium, and magnesium. Packed cell volume was negatively related to the herd's adjusted corrected milk, with the same trend in each lactational stage; the correlation coefficient between serum and packed cell volume was 0.16, a very weak correlation. Blood glucose concentration was lowest in the early lactational stage, indicating a negative energy balance then, and was negatively related to the herd's adjusted corrected milk from the peak to the late lactational stage, indicating a negative energy balance during that period in high-producing individuals or herds. The correlation coefficient between serum aspartate aminotransferase activity and serum γ-glutamyltransferase activity was 0.41, which indicates that serum γ-glutamyltransferase should be included as a parameter of the metabolic profile test to evaluate liver function. The body condition score of dairy cows in this country was lower than that of Japan in every lactational stage, and the magnitude of its increase during the middle and late lactational stages was small. A metabolic profile cannot be evaluated on nutritional intake alone: when an individual cow or a large percentage of cows in a herd show abnormal values in the parameters of the metabolic profile test, the veterinary clinician and the nutritionist should cooperate to diagnose diseases and to calculate the requirements of nutrients simultaneously.
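
The herd-evaluation rule described above (the percentage of cows outside the 80% standard limits for a lactational stage) can be sketched as follows; the blood urea nitrogen values and the limits are invented for illustration:

```python
# Illustrative sketch of the herd-evaluation criterion: compare each cow's value
# with the 80% standard limits for its lactational stage and report the
# percentage of cows outside those limits. Values and limits are hypothetical.

def percent_outside(values, lower, upper):
    """Percentage of values falling outside [lower, upper]."""
    outside = sum(1 for v in values if v < lower or v > upper)
    return 100.0 * outside / len(values)

# Hypothetical blood urea nitrogen values (mg/dl) for one herd, one stage.
bun = [14.2, 15.1, 22.8, 13.9, 16.0, 25.4, 15.5, 14.8, 16.3, 23.1]
lower_80, upper_80 = 10.0, 20.0   # assumed 80% standard limits

pct = percent_outside(bun, lower_80, upper_80)
print(pct)  # -> 30.0 (percentage of cows outside the standard limits)
```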

Data allocation and Replacement Method based on The Access Frequency for Improving The Performance of SSD (SSD의 성능향상을 위한 접근빈도에 따른 데이터 할당 및 교체기법)

  • Yang, Yu-Seok;Kim, Deok-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.5
    • /
    • pp.74-82
    • /
    • 2011
  • Because an SSD is composed of an array of NAND flash memory, it has a limited number of erase/write cycles and, unlike a hard disk, does not allow in-place updates. Thus, an FTL is used to manage the SSD's characteristics, which differ from those of a traditional disk. FTLs use page, block, or log-block mapping. Among them, log-block mapping methods such as BAST and FAST degrade SSD performance because frequent merge operations cause many pages to be copied and deleted. This paper proposes a data allocation and replacement method based on access frequency, which allocates PRAM as the checking area for access frequency, for log blocks, and for storing hot data in the SSD. The proposed method can enhance the performance and lifetime of the SSD by storing cold data in flash memory and storing log blocks and frequently accessed data in PRAM, thereby reducing merge and erase operations. In addition, a data replacement method is used to increase the utilization of the capacity-limited PRAM. The experimental results show that the proposed method performs 46% and 38% fewer erase operations than BAST and FAST, respectively, while its write performance is 34% and 19% higher and its read performance 5% and 3% higher.
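
The hot/cold placement idea can be sketched with a toy model. This is not the paper's FTL; it only illustrates the policy that PRAM holds the most frequently written pages and the coldest page is demoted to flash when PRAM overflows. Capacities and page names are invented:

```python
# Toy sketch of access-frequency-based placement: hot pages live in a small
# PRAM region, cold pages in NAND flash; when PRAM fills, the least frequently
# accessed page is demoted to flash (the "replacement" step).
from collections import Counter

class FrequencyPlacement:
    def __init__(self, pram_capacity):
        self.pram_capacity = pram_capacity
        self.pram, self.flash = {}, {}
        self.freq = Counter()

    def write(self, page, data):
        self.freq[page] += 1
        self.flash.pop(page, None)       # promoted data leaves flash
        self.pram[page] = data           # new/updated data starts in PRAM
        if len(self.pram) > self.pram_capacity:
            # demote the coldest PRAM page to flash
            coldest = min(self.pram, key=lambda p: self.freq[p])
            self.flash[coldest] = self.pram.pop(coldest)

    def location(self, page):
        return "PRAM" if page in self.pram else "flash" if page in self.flash else None

ftl = FrequencyPlacement(pram_capacity=2)
for page in ["A", "A", "A", "B", "B", "C"]:   # A is hot, C is cold
    ftl.write(page, data="x")
print(ftl.location("A"), ftl.location("C"))   # -> PRAM flash
```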

Development of a method for urban flooding detection using unstructured data and deep learning (비정형 데이터와 딥러닝을 활용한 내수침수 탐지기술 개발)

  • Lee, Haneul;Kim, Hung Soo;Kim, Soojun;Kim, Donghyun;Kim, Jongsung
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.12
    • /
    • pp.1233-1242
    • /
    • 2021
  • In this study, a model was developed to determine whether flooding has occurred using image data, which are unstructured data. The CNN-based VGG16 and VGG19 networks were used to develop the flood classification model. To build the model, images of flooded and non-flooded scenes were collected by web crawling. Since data collected this way contain noise, images irrelevant to this study were first deleted, and then the image size was changed to 224×224 for model input. In addition, image augmentation was performed by changing the angle of the images for diversity. Finally, training was performed with 2,500 flooded and 2,500 non-flooded images. In the model evaluation, the average classification performance of the model was 97%. In the future, if the model developed in this study is deployed in a CCTV control center system, the response to flood damage is expected to be much faster.
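
The data-preparation steps described above (resizing to 224×224 for the VGG input and augmenting by rotation) can be sketched in pure Python. A real pipeline would use PIL or OpenCV with arbitrary angles and feed the result to VGG16/VGG19; the nearest-neighbor resize and 90° rotation below are simplified stand-ins:

```python
# Sketch of the preprocessing/augmentation steps, not the paper's exact code.
# Images are plain 2D lists so the example stays self-contained.

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize of a 2D list-of-lists image."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]

def rotate90(img):
    """Rotate an image 90 degrees clockwise (one simple angle augmentation)."""
    return [list(row) for row in zip(*img[::-1])]

crawled = [[1, 2], [3, 4]]                  # toy 2x2 "image"
resized = resize_nearest(crawled, 224, 224) # VGG input size
augmented = [resized, rotate90(resized)]    # original + rotated copy
print(len(resized), len(resized[0]), len(augmented))  # -> 224 224 2
```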

Privacy Preserving Data Publication of Dynamic Datasets (프라이버시를 보호하는 동적 데이터의 재배포 기법)

  • Lee, Joo-Chang;Ahn, Sung-Joon;Won, Dong-Ho;Kim, Ung-Mo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.18 no.6A
    • /
    • pp.139-149
    • /
    • 2008
  • The amount of personal information collected by organizations and government agencies is continuously increasing. When a data collector publishes personal information for research and other purposes, individuals' sensitive information should not be revealed; at the same time, the published data must provide accurate statistical information for analysis. The k-anonymity and ℓ-diversity models are popular approaches to privacy-preserving data publication, but they are limited to static data releases: after a dataset is updated with insertions and deletions, the data collector cannot safely release up-to-date information. Recently, the m-invariance model has been proposed to support re-publication of dynamic datasets. However, m-invariant generalization can cause high information loss, and if an adversary has already obtained the sensitive values of some individuals before accessing the released information, m-invariance leads to severe privacy disclosure. In this paper, we propose a novel technique for safely releasing dynamic datasets. The proposed technique offers a simple and effective method for handling inserted and deleted records without generalization, while providing a degree of privacy preservation equivalent to the m-invariance model.
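
One way to publish without generalization, in the spirit of (but not identical to) the proposed technique, is bucketization: release buckets that each contain at least ℓ distinct sensitive values, so that a deleted record simply leaves its bucket. The greedy grouping and the data below are illustrative assumptions:

```python
# Hedged sketch of bucketization (anatomy-style release), not the paper's exact
# method: group records so each released bucket holds L distinct sensitive
# values, avoiding quasi-identifier generalization entirely.
from collections import defaultdict

def bucketize(records, l):
    """Greedy bucketization: records = [(record_id, sensitive_value), ...]."""
    by_value = defaultdict(list)
    for rid, sv in records:
        by_value[sv].append(rid)
    buckets = []
    # Repeatedly take one record from each of the L most frequent sensitive values.
    while sum(len(v) for v in by_value.values()) >= l:
        top = sorted((sv for sv in by_value if by_value[sv]),
                     key=lambda sv: -len(by_value[sv]))[:l]
        if len(top) < l:
            break  # cannot form another bucket with L distinct values
        buckets.append([(by_value[sv].pop(), sv) for sv in top])
    return buckets

records = [(1, "flu"), (2, "flu"), (3, "cancer"), (4, "hiv"), (5, "cancer"), (6, "hiv")]
buckets = bucketize(records, l=3)
for b in buckets:
    assert len({sv for _, sv in b}) == 3  # each bucket: 3 distinct sensitive values
print(len(buckets))  # -> 2
```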

RSP-DS: Real Time Sequential Patterns Analysis in Data Streams (RSP-DS: 데이터 스트림에서의 실시간 순차 패턴 분석)

  • Shin Jae-Jyn;Kim Ho-Seok;Kim Kyoung-Bae;Bae Hae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.9
    • /
    • pp.1118-1130
    • /
    • 2006
  • Existing pattern-analysis algorithms for data streams have focused on performance improvement and effective memory usage, but when new data streams arrive, they must analyze the patterns again and regenerate the pattern tree. This approach requires many computations in situations that demand real-time pattern analysis. This paper proposes a method that continuously analyzes the patterns of incoming data streams in real time: patterns are analyzed quickly, and up-to-date patterns are then obtained by updating the previously analyzed ones. The incoming data streams are divided into sequences by a time-based window, and information about the sequences is inserted into a hash table. When the number of sequences exceeds a predefined bound, patterns are mined from the hash table. The patterns form a pattern tree, and newly created patterns later update this tree, so that current patterns are always maintained in it. During pattern analysis, a new pattern and an existing pattern in the tree may share a suffix; in that case a pointer is created from the new pattern to the existing one, which reduces computation time for duplicated pattern analysis. Old patterns in the tree are easily deleted by a FIFO method. The advantage of our algorithm is demonstrated by a performance comparison with an existing method, MILE, under continuously changing patterns, and we also examine the performance variation as several variables of the algorithm change.
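
The time-based windowing step can be sketched as follows. The pattern tree and the suffix-pointer machinery of RSP-DS are omitted, and the event data, window length, and sequence representation are hypothetical:

```python
# Simplified sketch of the windowing step: incoming (time, item) events are cut
# into sequences by a time-based window, and sequence counts go into a hash
# table (here a Counter) from which frequent sequences could later be mined.
from collections import Counter

def sequences_by_window(events, window):
    """events: time-sorted list of (timestamp, item); returns one tuple per window."""
    seqs, current, start = [], [], None
    for t, item in events:
        if start is None or t - start >= window:
            if current:
                seqs.append(tuple(current))   # close the previous window
            current, start = [], t
        current.append(item)
    if current:
        seqs.append(tuple(current))
    return seqs

events = [(0, "a"), (1, "b"), (5, "a"), (6, "b"), (10, "c")]
seqs = sequences_by_window(events, window=5)
counts = Counter(seqs)                        # hash table of sequence frequencies
print(seqs, counts[("a", "b")])               # -> [('a', 'b'), ('a', 'b'), ('c',)] 2
```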

Memory Efficient Query Processing over Dynamic XML Fragment Stream (동적 XML 조각 스트림에 대한 메모리 효율적 질의 처리)

  • Lee, Sang-Wook;Kim, Jin;Kang, Hyun-Chul
    • The KIPS Transactions:PartD
    • /
    • v.15D no.1
    • /
    • pp.1-14
    • /
    • 2008
  • This paper addresses query processing on mobile devices with limited memory. When a query against a large volume of XML data is processed on such a device, techniques for fragmenting the XML data into chunks and for streaming and processing them are required; these techniques make it possible to process queries without materializing the XML data in its entirety. Previous schemes such as XFrag[4], XFPro[5], and XFLab[6] are not scalable with respect to the size of the XML data because they lack proper memory management: once information on the XML fragments needed for query processing is stored, it is not deleted even after it becomes useless. Thus, when XML fragments are generated dynamically and streamed indefinitely, normal completion of query processing cannot be guaranteed. In this paper, we address the scalability of query processing over dynamic XML fragment streams, proposing techniques that extend the previous schemes by deleting the fragment information accumulated during query processing. Performance experiments with our implementation showed that the extended schemes considerably outperform the previous ones in memory efficiency and in scalability with respect to the size of the XML data.
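
The core memory-management idea, deleting per-fragment bookkeeping as soon as it can no longer contribute to the query, can be sketched as below. The fragment representation, the `closes` field, and the path matching are hypothetical simplifications, not the papers' actual encoding:

```python
# Illustrative sketch: hold per-fragment information only while the query may
# still need it, and delete it once the fragment's subtree is closed, so memory
# stays bounded over an unbounded fragment stream.

class FragmentProcessor:
    def __init__(self, match_path):
        self.match_path = match_path
        self.pending = {}                 # fragment id -> stored bookkeeping

    def process(self, fragment):
        """fragment: dict with 'id', 'path', 'value', optional 'closes' ids."""
        results = []
        if fragment["path"] == self.match_path:
            results.append(fragment["value"])     # emit a match immediately
        else:
            self.pending[fragment["id"]] = fragment  # might be needed later
        # Cleanup: drop stored info for every fragment whose subtree is closed.
        for fid in fragment.get("closes", []):
            self.pending.pop(fid, None)           # freed instead of kept forever
        return results

proc = FragmentProcessor(match_path="/book/title")
proc.process({"id": 1, "path": "/book", "value": None})
out = proc.process({"id": 2, "path": "/book/title", "value": "XML Streams"})
proc.process({"id": 3, "path": "/book/end", "value": None, "closes": [1, 3]})
print(out, len(proc.pending))             # -> ['XML Streams'] 0
```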