• Title/Summary/Keyword: Paper Quality


A preliminary study and its application for the development of the quantitative evaluation method of developed fingerprints on porous surfaces using densitometric image analysis (다공성 표면에서 현출된 지문의 정량적인 평가방법 개발을 위한 농도계 이미지 분석을 이용한 선행연구 및 응용)

  • Cho, Jae-Hyun;Kim, Hyo-Won;Kim, Min-Sun;Choi, Sung-Woon
    • Analytical Science and Technology / v.29 no.3 / pp.142-153 / 2016
  • In crime scene investigation, fingerprint identification is regarded as one of the most important techniques for personal identification. However, objective and unbiased methods for comparing fingerprints developed with the diverse available development techniques are currently lacking. To develop an objective and quantitative method to improve fingerprint evaluation, a preliminary study was performed to extract useful research information from densitometric image analysis (CP Atlas 2.0) and the Automated Fingerprint Identification System (AFIS) applied to fingerprints developed on porous surfaces. First, inked fingerprints obtained by varying pressure (kg.f) and pressing time (sec.) were analyzed to find optimal conditions for obtaining fingerprint samples, because they could provide fingerprints of relatively uniform quality. The number of minutiae extracted with AFIS was compared with the areas of friction-ridge peaks calculated from the image analysis. Inked fingerprints made with a pressing pressure of 1.0 kg.f for 5 seconds provided the most visually clear fingerprints, the highest number of minutiae points, and the largest average area of friction-ridge peaks. In addition, images of latent fingerprints developed on thermal paper with the iodine fuming method were analyzed. The fingerprinting condition of 1.0 kg.f/5 sec was again found to be optimal, generating the highest number of minutiae and the largest average peak area. Additionally, when two ninhydrin solution concentrations (0.5 % vs. 5 %) were compared for latent fingerprints developed on print paper, the best condition was 2.0 kg.f/5 sec with the 5 % solution. It was confirmed that the larger the average area of the peaks generated by the image analysis, the higher the number of minutiae points.
With additional tests for fingerprint evaluation using the densitometric image analysis, this method can prove to be a new quantitative and objective assessment method for fingerprint development.
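As a rough illustration of the densitometric measure discussed above, the sketch below integrates the area under friction-ridge density peaks in a one-dimensional intensity profile. The profile values, the baseline threshold, and the trapezoidal integration are hypothetical stand-ins for illustration only; the paper's actual analysis used the CP Atlas 2.0 software.

```python
# Illustrative sketch (not CP Atlas 2.0): area under friction-ridge
# density peaks in a 1-D densitometric profile, one area per peak.

def peak_areas(profile, baseline):
    """Return the area of each contiguous region where density exceeds
    the baseline, integrated by the trapezoidal rule with unit spacing."""
    excess = [max(v - baseline, 0.0) for v in profile]
    areas, current, in_peak = [], 0.0, False
    for i in range(1, len(excess)):
        a, b = excess[i - 1], excess[i]
        if a > 0 or b > 0:
            current += (a + b) / 2.0   # trapezoid between samples i-1 and i
            in_peak = True
        elif in_peak:                   # peak just ended
            areas.append(current)
            current, in_peak = 0.0, False
    if in_peak:
        areas.append(current)
    return areas

# A toy profile with two ridge peaks rising above a baseline of 0.2
profile = [0.1, 0.5, 0.9, 0.5, 0.1, 0.1, 0.4, 0.8, 0.4, 0.1]
areas = peak_areas(profile, baseline=0.2)
print(len(areas), [round(a, 2) for a in areas])
```

Under this toy setup, a clearer fingerprint (higher, wider density peaks) yields larger peak areas, which is the quantity the study correlates with the AFIS minutiae count.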

Future Development Strategies for KODISA Journals: Overview of 2016 and Strategic Plans for the Future (KODISA 학술지 성장전략: 2016 개관 및 미래 성장개요)

  • Hwang, Hee-Joong;Lee, Jung-Wan;Youn, Myoung-Kil;Kim, Dong-Ho;Lee, Jong-Ho;Shin, Dong-Jin;Kim, Byung-Goo;Kim, Tae-Joong;Lee, Yong-Ki;Kim, Wan-Ki
    • Journal of Distribution Science / v.15 no.5 / pp.75-83 / 2017
  • Purpose - With the rise of the fourth industrial revolution, it has converged with the existing industrial revolution to increase the accessibility of knowledge and information. As a result, it has become easier for scholars to actively pursue and compile research in various fields. This study aims to assess the current standing of the KODISA journals: the Journal of Distribution Science (JDS), the International Journal of Industrial Distribution & Business (IJIDB), the East Asian Journal of Business Management (EAJBM), and the Journal of Asian Finance, Economics and Business (JAFEB) in a rapidly evolving era. Novel strategies for creating the future vision of KODISA 2020 will also be examined. Research design, data, and methodology - This research analyzes the published KODISA journals in order to offer a vision for KODISA 2020. Part 1 observes the current standing of the KODISA journals and gives an overview of past achievements. Part 2 discusses the activities needed for the KODISA journals (JDS, IJIDB, EAJBM, JAFEB) to branch out internationally, and significant journals are statistically analyzed in Part 3. Part 4 offers strategies for the continued growth of KODISA and visions for KODISA 2020. Results - Among the KODISA publications, IJIDB was second, JDS was 23rd (among 54 economics journals), and EAJBM was 22nd (out of 79 management-field journals). This shows the high quality of the KODISA journals. According to the 2016 publication analysis, JDS, IJIDB, EAJBM, and JAFEB had 157, 15, 16, and 28 publications, respectively. JDS showed an increase of 14% compared to the previous year, and JAFEB showed a significant increase of 68%, indicating a higher rate of paper submission than the other journals. IJIDB and EAJBM did not show any significant increases.
JDS published many studies related to distribution, distribution management, and consumer behavior. To raise the status of the KODISA journals to SCI level, first, more international conferences will be held to increase international recognition. Second, the systematic functions of the journals will be developed further to increase their stability. Third, graduate schools will be opened to foster future leaders in this field and build a platform for innovators and leaders. Conclusions - Within KODISA, JDS was first published in 1999 and was registered in SCOPUS in February 2017. The other sister journals within KODISA are preparing for SCOPUS registration as well. The KODISA journals will prepare to be innovative journals for 2020 and beyond.

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are major sources for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent. An antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If the antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is its use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification on all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected.
However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed for antecedent search in the text. We are the first to suggest this idea. To perform sequence labeling, we suggest using a structural SVM which receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus in which gold-standard answers, such as zero anaphors and their possible antecedents, are provided. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified. Thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification using a regular SVM. The experiment showed that our system's performance is F1 = 68.58%. This means that a state-of-the-art system can be developed with our technique. It is expected that future work enabling the system to utilize semantic information can lead to a significant performance improvement.
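The labeling formulation described in this abstract can be pictured with a minimal decoding sketch: the candidate noun phrases preceding the zero anaphor form a sequence, and one label sequence (at most one "antecedent" label) is chosen as a whole rather than by independent binary decisions. The scores below are invented stand-ins for the feature-weight dot products; the trained structural SVM with the modified Pegasos algorithm is not reproduced here.

```python
# Hedged sketch of the inference step only: antecedent search as choosing
# one label sequence over the candidate noun phrases.  Scores are toy values.

def best_label_sequence(scores, none_score=0.0):
    """Return the label sequence (at most one 'ANT', rest 'NOT') with the
    maximum total score; all 'NOT' if no candidate beats none_score."""
    best_i, best = None, none_score
    for i, s in enumerate(scores):
        if s > best:
            best_i, best = i, s
    return ["ANT" if i == best_i else "NOT" for i in range(len(scores))]

# Candidate noun phrases appearing before the zero anaphor, with toy scores
candidates = ["Wikipedia", "the document", "the system"]
scores = [0.4, 1.2, 0.7]
print(best_label_sequence(scores))
```

The design point is that the output is constrained at the sequence level, so the decoder can return "no antecedent in the text" (all `NOT`), which is exactly the case where the system falls back to the title.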

Study of Motion Effects in Cartesian and Spiral Parallel MRI Using Computer Simulation (컴퓨터 시뮬레이션을 이용한 직각좌표 및 나선주사 방식의 병렬 자기공명 영상에서 움직임 효과 연구)

  • Park, Sue-Kyeong;Ahn, Chang-Beom;Sim, Dong-Gyu;Park, Ho-Chong
    • Investigative Magnetic Resonance Imaging / v.12 no.2 / pp.123-130 / 2008
  • Purpose : Motion effects in parallel magnetic resonance imaging (MRI) are investigated. Parallel MRI is known to be robust to motion due to its reduced acquisition time. However, if involuntary motions such as cardiac or respiratory motion occur during the acquisition, motion artifacts can be even worse than in conventional (non-parallel) MRI. In this paper, we define several types of motion and investigate their effects in parallel MRI in comparison with conventional MRI. Materials and Methods : To investigate motion effects in parallel MRI, 5 types of motion are considered. Types 1 and 2 are periodic motions with different amplitudes and periods. Types 3 and 4 are segment-based linear motions, stationary within each segment. Type 5 is a uniform random motion. For the simulation, Cartesian- and spiral-grid based parallel and non-parallel (conventional) MRI are used. Results : Based on the motions defined, motion artifacts in parallel and non-parallel MRI are investigated. From the simulation, non-parallel MRI shows smaller root mean square error (RMSE) values than parallel MRI for the periodic (type-1 and 2) motions. Parallel MRI shows fewer motion artifacts for the linear (type-3 and 4) motions, where motion is reduced by the shorter acquisition time. Similar motion artifacts are observed for the random motion (type-5). Conclusion : In this paper, we simulate motion effects in parallel MRI. Parallel MRI is effective in reducing motion artifacts when motion is reduced by the shorter acquisition time. However, conventional MRI shows better image quality than parallel MRI when fast periodic motions are involved.
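The core of this kind of simulation can be sketched in one dimension: a translation of the object during acquisition appears as a k-dependent phase error on the acquired data (Fourier shift theorem), and RMSE against the motion-free image quantifies the artifact. The object, motion amplitude, and period below are toy values, not the paper's Cartesian/spiral setup.

```python
# Hedged 1-D sketch of motion-artifact simulation via k-space phase errors.
import numpy as np

n = 128
x = np.zeros(n)
x[48:80] = 1.0                       # simple 1-D "object"
k = np.fft.fftfreq(n)                # normalized k-space coordinates
t = np.arange(n)                     # one k-space sample per time step

def rmse_for_motion(shift):
    """RMSE of the reconstruction when the object is shifted by shift[i]
    pixels while k-space sample i is acquired (Fourier shift theorem)."""
    data = np.fft.fft(x) * np.exp(-2j * np.pi * k * shift)
    recon = np.abs(np.fft.ifft(data))
    return float(np.sqrt(np.mean((recon - x) ** 2)))

periodic = 2.0 * np.sin(2 * np.pi * t / 16)   # type-1-like periodic motion
print("static  :", rmse_for_motion(np.zeros(n)))
print("periodic:", round(rmse_for_motion(periodic), 3))
```

Shortening the acquisition (fewer corrupted samples, as in parallel imaging) reduces the accumulated phase error for slow drifts, which is the mechanism behind the paper's type-3/4 results.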


GIS-based Market Analysis and Sales Management System : The Case of a Telecommunication Company (시장분석 및 영업관리 역량 강화를 위한 통신사의 GIS 적용 사례)

  • Chang, Nam-Sik
    • Journal of Intelligence and Information Systems / v.17 no.2 / pp.61-75 / 2011
  • A Geographic Information System (GIS) is a system that captures, stores, analyzes, manages, and presents data with reference to geographic location. In the late 1990s and early 2000s it was used in a limited way in government sectors such as public utility management, urban planning, landscape architecture, and environmental contamination control. However, a growing number of open-source packages running on a range of operating systems enabled many private enterprises to explore the concept of viewing GIS-based sales and customer data on their own computer monitors. K telecommunication company has dominated the Korean telecommunication market by providing diverse services such as high-speed internet, PSTN (Public Switched Telephone Network), VoIP (Voice over Internet Protocol), and IPTV (Internet Protocol Television). Even though the telecommunication market in Korea is huge, the competition between major service providers is fiercer than ever before. Service providers struggle to acquire as many new customers as possible, attempt to cross-sell more products to their regular customers, and make more effort to retain their best customers by offering unprecedented benefits. Most service providers, including K telecommunication company, tried to adopt the concept of customer relationship management (CRM) and to analyze customers' demographic and transactional data statistically in order to understand customer behavior. However, customer information management remained at a basic level, and the quality and quantity of customer data were insufficient not only for understanding the customers but also for designing marketing and sales strategies. For example, the 3,074 legal regional divisions, originally defined by the government, were too broad to calculate sub-regional service subscription and cancellation ratios.
Additional external data such as house size, house price, and household demographics are also needed to measure sales potential. Furthermore, making tables and reports was time consuming, and they were insufficient for a clear judgment about the market situation. In 2009, this company needed a dramatic shift in the way it conducted marketing and sales activities, and finally developed a dedicated GIS-based market analysis and sales management system. This system brought a huge improvement in the efficiency with which the company was able to manage and organize all customer- and sales-related information, and to access that information easily and visually. After the GIS information system was developed and applied to marketing and sales activities at the corporate level, the company was reported to have increased sales and market share substantially. This was because, by analyzing past market and sales initiatives, estimating sales potential, and targeting key markets, the system could make suggestions and enable the company to focus its resources on the demographics most likely to respond to promotions. This paper reviews the subjective and unclear marketing and sales activities that K telecommunication company previously operated, and introduces the whole process of developing the GIS information system. The process consists of the following 5 modules: (1) customer profile cleansing and standardization, (2) internal/external DB enrichment, (3) segmentation of the 3,074 legal regions into 46,590 sub-regions called blocks, (4) GIS data mart design, and (5) GIS system construction. The objective of this case study is to emphasize the need for a GIS system and to show how it works in private enterprises by reviewing the development process of the K company's market analysis and sales management system. We hope that this paper suggests valuable guidelines to companies that are considering introducing or constructing a GIS information system.

Evaluation of Dosimetric Characteristics of Small Fields in Cone Versus Square Fields Based on Linear Accelerators (LINAC) for Stereotactic Radiosurgery (SRS) (선형가속기를 기반으로 한 뇌정위 방사선 수술 시 전용 콘과 정방형 소조사면의 선량 특성에 관한 고찰)

  • Yoon, Joon;Lee, Gui-Won;Park, Byung-Moon
    • Journal of radiological science and technology / v.33 no.1 / pp.61-66 / 2010
  • In this paper we evaluated the small-field dose characteristics of exclusive cone fields versus square fields for stereotactic radiosurgery (SRS) based on linear accelerators (LINAC). For this test, we used a small beam detector (stereotactic field detector: SFD) with a 6 MV photon beam and a water phantom system (IBA, Germany). Percentage depth dose (PDD) was measured for different field sets (cones: Φ1 cm, Φ2 cm, Φ3 cm; square fields: 1×1 cm², 2×2 cm², 3×3 cm²) at a source-skin distance (SSD) of 100 cm. We measured at point depths of 1.5 cm, 5 cm, 10 cm, 20 cm, and 30 cm. The output factors were measured under the same geometrical conditions as the PDD and normalized at the maximum dose depth. To analyze the penumbra, we measured the dose profile at 95 cm SSD and 5 cm depth for each field size (Φ1 cm, Φ3 cm, 1×1 cm², and 3×3 cm²) using the SFD. We obtained values at 1 mm intervals in the physical field (90%) and 0.5 mm intervals in the penumbra region (20 to 80%). The PDD variation of the exclusive cones and square fields was 4.3 to 7.9% less than that of the standard field size (10×10 cm²). The variation of PDD was reduced as the field size increased. To compare beam quality, we analyzed PDD(20,10), and the results showed variations under 1% for all experiments except the Φ1 cm cone and 1×1 cm² fields. Output factors of the exclusive cones were 3.1~4.6% higher than those of the square fields, and the penumbra region of the exclusive cone was reduced by 20% compared to the square fields. As previous research reports, precise dosimetry in small beam fields is very important for SRS. In this paper, we showed the effectiveness of the exclusive cone compared to the square field, and we will study the characteristics of various detectors for small beam fields.
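The beam-quality comparison in this abstract rests on the PDD(20,10) index, the ratio of the percentage depth dose at 20 cm depth to that at 10 cm after normalizing to the dose maximum. The sketch below shows the arithmetic with hypothetical detector readings, not the paper's measured data.

```python
# Hedged sketch of the PDD(20,10) beam-quality index; readings are invented.

def pdd_20_10(dose_by_depth):
    """dose_by_depth: {depth_cm: detector reading}.  Normalize to the
    maximum reading (100% at dmax), then take PDD(20 cm) / PDD(10 cm)."""
    dmax = max(dose_by_depth.values())
    pdd = {d: 100.0 * v / dmax for d, v in dose_by_depth.items()}
    return pdd[20] / pdd[10]

# Hypothetical readings for a 3 cm cone and a 3x3 cm^2 square field
cone_3cm   = {1.5: 1.00, 5: 0.86, 10: 0.65, 20: 0.38, 30: 0.22}
square_3cm = {1.5: 1.00, 5: 0.86, 10: 0.65, 20: 0.38, 30: 0.22}

q_cone, q_sq = pdd_20_10(cone_3cm), pdd_20_10(square_3cm)
variation = abs(q_cone - q_sq) / q_sq * 100
print(round(q_cone, 3), round(variation, 2))
```

A variation under 1% between the two field types, as reported for all but the smallest fields, indicates matched beam quality between cone and square collimation.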

The knowledge and human resources distribution system for university-industry cooperation (대학에서 창출하는 지적/인적자원에 대한 기업연계 플랫폼: 인문사회계열을 중심으로)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.133-149 / 2014
  • One of the main purposes of universities is to create new intellectual resources that increase social value. These intellectual resources include academic research papers, lecture notes, patents, and creative ideas produced by both professors and students. However, intellectual resources in universities are often not distributed to the actual users or companies; moreover, they are not even systematically managed inside the universities. Therefore, it is almost impossible for companies to access and utilize the knowledge created by university students and professors. Thus, the current level of knowledge sharing between universities and industries is very low. This wastes high-quality intellectual and human resources and leads to considerable social loss in modern society. In the 21st century, creative ideas are the key growth engine for many industries. Many globally leading companies such as Fedex, Dell, and Facebook established their business models on innovative ideas created by university students in undergraduate courses. This indicates that unconventional ideas from young generations can create new growth engines for companies and immensely increase social value. Therefore, this paper suggests a new platform for distributing intellectual properties through university-industry cooperation. The suggested platform distributes the intellectual resources of universities to industries. The platform has the following characteristics. First, it distributes not only the intellectual resources, but also the human resources associated with the knowledge. Second, it diversifies the types of compensation for utilizing the intellectual properties, which benefits both university students and companies. For example, it extends conventional monetary rewards to non-monetary rewards such as advantages in internship program participation or job interviews.
Third, it suggests a new knowledge map based on the relationships between keywords, so that the various types of intellectual properties can be searched efficiently. In order to design the system platform, we surveyed 120 potential users to obtain the system requirements. First, 50 university students and 30 professors in humanities and social sciences departments were surveyed. We asked what types of intellectual resources they produce per year, how many intellectual resources they produce, whether they are willing to distribute their intellectual properties to industry, and what types of compensation they expect in return. Second, 40 entrepreneurs, potential consumers of the intellectual properties of universities, were surveyed. We asked what types of intellectual resources they want, what types of compensation they are willing to provide in return, and what factors they consider important when searching for intellectual properties. The implications of this survey are as follows. First, entrepreneurs are willing to utilize intellectual properties created by both professors and students. They are more interested in creative ideas from universities than in academic papers or educational class materials. Second, non-monetary rewards, such as participating in an internship program or a job interview, can appropriately replace monetary rewards. The survey showed that the majority of university students were willing to provide their intellectual properties without monetary rewards in order to build industrial networks with companies. The entrepreneurs were also willing to provide non-monetary compensation and hoped to build networks with university students for recruiting. Thus, non-monetary rewards are mutually beneficial for both sides.
Third, classifying the intellectual resources of universities by academic area is inappropriate for efficient searching, and the various types of intellectual resources cannot be categorized under a single standard. Based on these survey results, this paper suggests a new platform for the distribution of intellectual materials and human resources through university-industry cooperation. The suggested platform contains four major components, namely knowledge schema, knowledge map, system interface, and GUI (Graphic User Interface), and the paper presents the overall system architecture.
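The keyword-relationship idea behind the knowledge map described above can be pictured with a tiny lookup sketch: resources are tagged with keywords, and a query also retrieves resources tagged with related keywords rather than relying on academic-area categories. All keywords, relations, and resource names below are invented examples, not the paper's schema.

```python
# Hedged sketch of keyword-relationship search; all data are hypothetical.

related = {
    "marketing": {"branding", "consumer behavior"},
    "branding": {"marketing"},
}
resources = {
    "idea-042": {"branding"},
    "paper-107": {"consumer behavior"},
    "notes-003": {"statistics"},
}

def search(keyword):
    """Return resources tagged with the keyword or any related keyword."""
    terms = {keyword} | related.get(keyword, set())
    return sorted(r for r, tags in resources.items() if tags & terms)

print(search("marketing"))
```

The point of the design is that a query for "marketing" also surfaces a branding idea and a consumer-behavior paper, which a rigid department-based taxonomy would keep in separate bins.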

Development of a Business Model for Korean Insurance Companies with the Analysis of Fiduciary Relationship Persistency Rate (신뢰관계 유지율 분석을 통한 보험회사의 비즈니스 모델 개발)

  • 최인수;홍복안
    • Journal of the Korea Society of Computer and Information / v.6 no.4 / pp.188-205 / 2001
  • The insurer's duty of declaration is based on reciprocity under the principle of utmost good faith, and recently it has been widely recognized in British and American insurance circles. The conception of the fiduciary relationship is no longer an equity or legal theory confined to nations with Anglo-American law. Therefore, recognizing the fiduciary relationship as the essence of the insurance contract, which is more closely related to the public interest than any other field, will serve as an efficient measure to seek a fair and reasonable relationship with the contractor, and will provide a legal foundation that permits the contractor to bring an action for damages against violation of the insurer's duty of declaration. In the future, only when the fiduciary relationship is accepted as the essence of the insurance contract can the business performance and quality of the insurance industry be expected to increase. Therefore, maintaining this fiduciary relationship well, that is, increasing the fiduciary relationship persistency rate, seems to be the bottom line for the insurance industry. In this paper, we developed a fiduciary relationship persistency rate based on case-by-case comparison, represented as the ratio of maintained contract months to paid months for each contract at the basis point. Based on this analysis, we developed a new business model seeking maximum profit with low cost and high efficiency and a management policy that puts its priority on substantiality, as an improvement measure to break away from the vicious circle of high cost and low efficiency and from a management policy that puts its priority on external growth (expansion of market share).
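Read as a ratio of maintained contract months to paid months, the persistency measure described above can be sketched as a simple aggregate over a book of contracts. The interpretation of "maintained to paid months" and the contract figures below are assumptions for illustration, not the paper's data or exact formula.

```python
# Hedged sketch of a persistency rate as maintained months over paid months.
# Contract tuples are hypothetical: (maintained_months, paid_months).

def persistency_rate(contracts):
    """Aggregate persistency (%) across a list of contracts."""
    maintained = sum(m for m, _ in contracts)
    paid = sum(p for _, p in contracts)
    return 100.0 * maintained / paid

book = [(24, 24), (12, 18), (36, 36), (6, 24)]
print(round(persistency_rate(book), 1))
```

A fully maintained book yields 100%; early lapses (such as the 6-of-24-month contract above) pull the rate down, which is the signal the proposed business model tries to maximize.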


Improved Method of License Plate Detection and Recognition using Synthetic Number Plate (인조 번호판을 이용한 자동차 번호인식 성능 향상 기법)

  • Chang, Il-Sik;Park, Gooman
    • Journal of Broadcast Engineering / v.26 no.4 / pp.453-462 / 2021
  • A large amount of license plate data is required for car number recognition. License plate data need to be balanced, covering past license plates through the latest designs. However, it is difficult to obtain data spanning actual past plates to the latest ones. To solve this problem, license plate recognition studies using deep learning are being conducted with synthetically generated license plates. Since synthetic data differ from real data, various data augmentation techniques are used to close the gap. Existing data augmentation simply used methods such as brightness, rotation, affine transformation, blur, and noise. In this paper, we apply a style transformation method that transforms synthetic data into the style of real-world data, alongside these augmentation methods. In addition, real license plate images are noisy when captured from a distance or in dark environments. If we simply recognize characters from such input data, the chance of misrecognition is high. To improve character recognition, we applied DeblurGANv2 as a quality-improvement step, increasing the accuracy of license plate recognition. YOLO-V5 was used as the deep learning method for license plate detection and license plate number recognition. To measure performance on synthetic license plate data, we constructed a test set from license plates we collected ourselves. License plate detection without style conversion recorded 0.614 mAP. Applying the style transformation improved license plate detection performance to 0.679 mAP. In addition, the successful detection rate without image enhancement was 0.872, and 0.915 after image enhancement, confirming the performance improvement.
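The mAP and detection-rate figures quoted above ultimately come down to scoring predicted boxes against ground truth with intersection-over-union (IoU). The sketch below shows that scoring step only; the boxes are hypothetical, and the actual YOLO-V5 pipeline with style transfer and DeblurGANv2 is not reproduced.

```python
# Hedged sketch of IoU-based detection scoring; boxes are invented examples
# given as (x1, y1, x2, y2) pixel coordinates.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

gt   = (100, 100, 200, 150)   # ground-truth plate box
pred = (110, 105, 205, 150)   # predicted box

score = iou(gt, pred)
print(round(score, 3), score >= 0.5)
```

With a common 0.5 IoU threshold, this prediction counts as a successful detection; mAP additionally averages precision over confidence thresholds and, typically, over several IoU thresholds.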

A Study on Development and Prospects of Archival Finding Aids (기록 검색도구의 발전과 전망)

  • Seol, Moon-Won
    • The Korean Journal of Archival Studies / no.23 / pp.3-43 / 2010
  • Finding aids are tools that facilitate locating and understanding archives and records. Traditionally there are two types of archival finding aids: vertical and horizontal. Vertical finding aids such as inventories have multi-level descriptions based on provenance, while horizontal ones such as catalogs and indexes guide users to the vertical finding aids based on subject. In the web environment, traditional finding aids are evolving into more dynamic forms. Respecting the principles of provenance and original order, vertical finding aids are changing to multi-entity structures with the development of ISAD(G), ISAAR(CPF), and ISDF as standards for describing each entity. However, vertical finding aids can be too difficult, complicated, and boring for many users, who are accustomed to the easy and engaging search tools of the internet world. Complementing them, new types of finding aids are appearing that provide easy, interesting, and extensive access channels. This study investigates the development and limitations of vertical finding aids, and the recent trend of new finding aids evolving to complement the vertical ones. The study finds three new trends in finding aid development: (i) mixture, (ii) integration, and (iii) openness. In recent days, certain finding aids are mixed with stories, and others provide integrated searches across the collections of various heritage institutions. There are cases of experimenting with user participation in the development of finding aids using Web 2.0 applications. These new types of finding aids can also cause problems such as decontextualised description and prejudice, especially in the case of mixed finding aids, and quality control of user-contributed annotations and comments. To solve these problems, the present paper suggests strengthening the infrastructure of vertical finding aids, connecting them with the various new ones, and facilitating interactions with users of finding aids.
It is hoped that the present paper will provide impetus for archives including the National Archives of Korea to set up and evaluate the development strategies for archival finding aids.