• Title/Summary/Keyword: location problem

Search Results: 1,854

Estimating the Dimension of a Crosswalk in Urban Area - Focusing on Width and Stop Line - (도시부 횡단보도 제원 산정에 관한 연구 - 폭과 정지선을 중심으로 -)

  • Kim, Yoomi;Park, Jejin;Kwon, Sungdae;Ha, Taejun
    • KSCE Journal of Civil and Environmental Engineering Research / v.36 no.5 / pp.847-856 / 2016
  • Korea's rapid economic growth and urbanization have brought population, environmental, and housing problems, and the traffic problem in particular has become a serious social issue. Because transportation policy has concentrated on vehicle flow, pedestrian deaths in 2015 (1,795 persons) accounted for 38.8% of all traffic fatalities (4,621 persons), a figure that demands a solution. Although a crosswalk should let pedestrians cross safely, crosswalks are currently dimensioned from the width of the road, with no exact standard for crosswalk width or stop-line location. Moreover, around campuses and commercial facilities, crosswalks are designed uniformly without considering pedestrian volume. The purpose of this study is therefore to estimate reasonable crosswalk dimensions that reflect pedestrian volume and walking speed, and thereby reduce accidents at crosswalks suffering from problems such as vehicle-oriented signal timing, insufficient pedestrian signal time, and long crossing distances. The methodology is as follows. First, existing regulations and research on crosswalks, pedestrians, and stop lines are reviewed. After analyzing the problems of current crosswalk widths and stop lines, a methodology for calculating the specifications is presented, and the crosswalk specifications are determined on that basis. The crosswalk specifications and stop-line improvements presented in this study are expected to serve as base data for establishing related safety facilities and standards.
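The abstract stops short of the calculation itself, but the sizing logic it describes (pedestrian volume plus walking speed) can be sketched as follows. This is an illustrative sketch only, not the paper's formula: the entry allowance, walking speed, unit flow rate, and minimum width are assumed placeholder values.

```python
# Illustrative sketch only: entry allowance, walking speed, unit flow
# rate, and the 4 m minimum width are assumed placeholder values, not
# the paper's estimated specifications.

def pedestrian_green_time(crossing_length_m, walking_speed_mps=1.0,
                          entry_time_s=7.0):
    # green time = fixed entry allowance + time to walk the crossing
    return entry_time_s + crossing_length_m / walking_speed_mps

def required_width(peak_pedestrians_per_cycle, green_s,
                   unit_flow_ped_per_m_s=1.0, minimum_m=4.0):
    # width that lets the per-cycle pedestrian demand clear during the
    # green interval, never narrower than the assumed minimum width
    width = peak_pedestrians_per_cycle / (unit_flow_ped_per_m_s * green_s)
    return max(width, minimum_m)
```

Under these assumed values, a 20 m crossing gives a 27 s pedestrian green, and a demand of 150 pedestrians per cycle over a 25 s green needs a 6 m width.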

The Effect of Local Condition on the Development at Dairy Farming (지역적(地域的) 입지조건(立地條件)이 낙농경영전개(酪農經營展開)에 미친 영향(影響))

  • Lee, I.H.;Chai, Y.S.
    • Korean Journal of Agricultural Science / v.1 no.1 / pp.59-66 / 1974
  • This paper describes how location influences the development of dairy farming, comparing Chuncheon, which lies in the mountains; Daejeon, a transportation center on flat ground; and Incheon, the gateway to Seoul and an industrial city. The results are summarized as follows: 1. Incheon's vast market strongly promoted the development of dairy management, but a shortage of roughage for feed is the critical problem there. 2. Because 92% of the dairy farmers in Chuncheon also raise chicks as a side business, the region does not make smooth use of its vast grass pasture. 3. In Daejeon, the concurrent running of orchards is the problem. 4. There is no wage gap between the regions; the labor supply is most abundant in Incheon, while in the mountainous area dairy farming competes for labor with other forms of agriculture. 5. Full-time employees tend to become skillful with experience; family labor is skilled, but full-time employees are highly mobile and vary widely in skill. 6. Because the division of labor with other jobs is unclear in Chuncheon and Daejeon, many unspecialized workers are employed. 7. Milk prices differ by region; dairy farmers' income is strongly affected by the low purchasing price of milk plants, which is one of the important factors hindering the development of dairy farming in Chuncheon.


Personalized Exhibition Booth Recommendation Methodology Using Sequential Association Rule (순차 연관 규칙을 이용한 개인화된 전시 부스 추천 방법)

  • Moon, Hyun-Sil;Jung, Min-Kyu;Kim, Jae-Kyeong;Kim, Hyea-Kyeong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.195-211 / 2010
  • An exhibition is a market event of limited duration at which exhibitors present their main product range to business or private visitors, and it also plays a key role as an effective marketing channel. Because visitors' opinions after an exhibition directly affect sales and company image, exhibition organizers must consider visitors' varied needs. To meet those needs, ubiquitous technologies have been applied at some exhibitions. Despite their development, however, such services generate information only when visitors request it, so they cannot always reflect visitors' preferences; they have reached their limit in meeting visitors' needs, which may lead to lost marketing opportunities. Recommender systems are well suited to overcoming these limitations: they can recommend booths that match visitors' preferences and thus help visitors who face difficult choices in the exhibition environment. One of the most successful and widely used technologies for building recommender systems is Collaborative Filtering. Traditional recommender systems, however, use only neighbors' evaluations or behaviors for a personalized prediction, so they cannot reflect visitors' dynamic preferences and lack accuracy in the exhibition environment. Although the ubiquitous environment offers much useful information for inferring visitors' preferences (e.g., a visitor's current location and booth visit path), such systems use only limited information for recommendation. In this study, we propose a booth recommendation methodology using Sequential Association Rules, which take the sequence of visits into account. Recent studies of Sequential Association Rules use constraints to improve performance.
However, because traditional Sequential Association Rule mining considers all rules during recommendation, it suffers a scalability problem when applied to a large exhibition. To solve this, our methodology builds a confidence database before the recommendation process. To build it, we first find the preceding rules whose frequency exceeds a threshold, then compute the confidence of each preceding rule with respect to each booth not contained in that rule. The confidence database therefore holds two kinds of information: the preceding rules and their confidence toward each booth. In the recommendation process, we simply generate the target visitor's preceding rules from his or her visit records and recommend booths according to the confidence database, which we expect to reduce the time spent on recommendation. To evaluate the proposed methodology, we use real booth visit records, including each visitor's visit sequence, collected by RFID technology at an IT exhibition, and compare its performance with a traditional Collaborative Filtering system. The proposed methodology generally outperforms traditional Collaborative Filtering. The experimental results also reveal several characteristics. First, it performs best when recommending a single booth. Because it detects preceding rules from a portion of the visitors, it cannot give a correct recommendation to a visitor whose movement pattern differs greatly from the majority, even if the number of recommendations is increased: trained on the whole visitor population, it cannot recommend correctly for visitors with unique paths. Second, whereas the performance of typical recommender systems improves as time passes, our methodology achieves higher performance with limited information, such as one or two time periods. It can therefore make recommendations even when little is known about the target visitor's booth visits, and it uses only a small amount of information in the recommendation process, so we expect it to support real-time recommendation in the exhibition environment. Overall, the proposed methodology outperforms traditional Collaborative Filtering systems, and we expect it to be applicable to booth recommendation systems that satisfy visitors in the exhibition environment.
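The confidence-database construction described above can be sketched roughly as follows. This is a minimal reading of the method, not the authors' implementation: it treats a "preceding rule" as a contiguous visit subsequence up to a fixed length and scores each booth by the confidence of the rule immediately preceding it; the rule-length cap and the tie-breaking are assumptions.

```python
from collections import Counter, defaultdict

def build_confidence_db(sequences, min_support, max_len=2):
    # count contiguous visit prefixes ("preceding rules") and the booth
    # visited immediately after each occurrence
    rule_count = Counter()
    next_count = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq)):
            for length in range(1, max_len + 1):
                if i + length > len(seq):
                    break
                rule = tuple(seq[i:i + length])
                rule_count[rule] += 1
                if i + length < len(seq):
                    next_count[rule][seq[i + length]] += 1
    # keep only rules with frequency above the threshold, storing their
    # confidence toward each booth not contained in the rule itself
    return {
        rule: {booth: c / cnt for booth, c in next_count[rule].items()
               if booth not in rule}
        for rule, cnt in rule_count.items() if cnt >= min_support
    }

def recommend(db, visit_history, top_n=1, max_len=2):
    # score unvisited booths by the best confidence among the rules
    # formed from the tail of the visitor's own visit record
    scores = Counter()
    for length in range(1, min(max_len, len(visit_history)) + 1):
        rule = tuple(visit_history[-length:])
        for booth, conf in db.get(rule, {}).items():
            if booth not in visit_history:
                scores[booth] = max(scores[booth], conf)
    return [booth for booth, _ in scores.most_common(top_n)]
```

Because the confidences are precomputed offline, the online step is only a handful of dictionary lookups per visitor, which is the source of the claimed speed-up over scanning all rules.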

Extraction of Landmarks Using Building Attribute Data for Pedestrian Navigation Service (보행자 내비게이션 서비스를 위한 건물 속성정보를 이용한 랜드마크 추출)

  • Kim, Jinhyeong;Kim, Jiyoung
    • KSCE Journal of Civil and Environmental Engineering Research / v.37 no.1 / pp.203-215 / 2017
  • Interest in Pedestrian Navigation Services (PNS) has grown with the spread of smartphones and improvements in positioning technology, and because of the characteristics of pedestrian movement and path-finding success rates, landmarks are an efficient basis for route guidance. Accordingly, research on extracting landmarks has progressed. Preceding studies, however, considered only the differences between buildings and ignored the visual attention drawn by the map display of the PNS. This study addresses that limitation by defining building attributes as local and global variables: local variables represent differences between buildings and thus reflect a building's saliency, while global variables represent the inherent characteristics of buildings and thus reflect visual attention. The study also considers network connectivity and resolves the overlap of landmark candidate groups with a network Voronoi diagram. To extract landmarks, we defined building attribute data based on preceding research, selected choice points for pedestrians in the pedestrian network data, and determined a landmark candidate group at each choice point. Building attributes were calculated for each candidate group, and landmarks were finally extracted by principal component analysis. We applied the proposed method to part of Gwanak-gu, Seoul, and evaluated the extracted landmarks against the labels and landmarks used by portal sites such as NAVER and DAUM. Of the 219 NAVER and DAUM landmarks, 132 (60.3%) were extracted by the proposed method, and we confirmed that a further 228 extracted landmarks that appear in neither portal were helpful for deciding direction changes in local-level path finding.
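The final step, scoring landmark candidates by principal component analysis of their attribute data, might be sketched as below. The composite score (explained-variance-weighted component scores, with loadings sign-fixed so that larger attribute values score higher) is an assumption for illustration; the paper does not specify how the components are combined.

```python
import numpy as np

def landmark_scores(attributes):
    # rows: candidate buildings; columns: local + global attribute variables
    X = np.asarray(attributes, dtype=float)
    Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)   # standardize
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)    # PCA via SVD
    signs = np.where(Vt.sum(axis=1) >= 0, 1.0, -1.0)    # orient loadings so
    Vt = Vt * signs[:, None]                            # "more" scores higher
    weights = S ** 2 / np.sum(S ** 2)                   # explained variance
    return (Z @ Vt.T) @ weights                         # weighted composite

def pick_landmark(candidate_ids, attributes):
    # the candidate with the highest composite PCA score is the landmark
    return candidate_ids[int(np.argmax(landmark_scores(attributes)))]
```

With this composite, a building that stands out on most attributes (tall, large, distinctive) dominates the first component and is selected from its candidate group.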

The consideration about exact set-up with stereotactic radiosurgery for lung cancer. (폐암 환자의 전신 정위적 방사선 수술시 정확한 SET UP에 대한 고찰)

  • Seo, Dong-Rin;Hong, Dong-Gi;Kwon, Kyung-Tea;Park, Kwang-Ho;Kim, Jung-Man
    • The Journal of Korean Society for Radiation Therapy / v.16 no.2 / pp.1-8 / 2004
  • Purpose: Confirming the patient's set-up precisely is an important factor in stereotactic radiosurgery. In lung cancer in particular, the tumor moves with respiration, so it is difficult to confirm the exact location with an L-gram or EPID. We verified patient set-up for this kind of problem with a verification system (ExacTrac 3.0). Materials and Methods: Six patients with lung cancer underwent stereotactic radiosurgery; five used an ABC device and one did not. We obtained each patient's L-gram and EPID images with a body frame (Elekta, Sweden), compared the anterior and lateral images, and confirmed the set-up. We then fused the DRR image from CT with the X-ray image of the verification system (ExacTrac 3.0) in three dimensions, analyzed the coordinate values (vertical, longitudinal, lateral), and quantified the difference from the existing method. Results: With the L-gram and EPID, the accuracy of the patient's set-up had to be judged subjectively before treatment. Using the verification system (ExacTrac 3.0), the coordinate values (vertical, longitudinal, lateral) of the patient's set-up fell within 5 mm, and the coordinate differences could be assessed visually and objectively. The verification system (ExacTrac 3.0) was therefore useful in judging the accuracy of the patient's set-up. Conclusion: The verification system (ExacTrac 3.0) allows the patient's set-up to be confirmed exactly at any time, but it has several drawbacks: the confirmation process is more complex than the existing one, the CT scan slice thickness must be within 3 mm, and the X-ray image must be clear. If these problems are solved, stereotactic radiosurgery will benefit, because exact patient positioning can be confirmed easily.
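The coordinate comparison the authors describe, checking that the vertical, longitudinal, and lateral set-up differences fall within 5 mm, amounts to a simple tolerance test, sketched here with hypothetical axis names (`vrt`, `lng`, `lat`).

```python
import math

def setup_deviation(planned, measured, tolerance_mm=5.0):
    # per-axis set-up differences, their 3D vector length, and whether
    # every axis falls within the tolerance used in the study (5 mm);
    # the axis keys "vrt"/"lng"/"lat" are illustrative names
    diffs = {axis: measured[axis] - planned[axis]
             for axis in ("vrt", "lng", "lat")}
    vector = math.sqrt(sum(d * d for d in diffs.values()))
    within = all(abs(d) <= tolerance_mm for d in diffs.values())
    return diffs, vector, within
```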


Design and Implementation of Quality Broker Architecture to Web Service Selection based on Autonomic Feedback (자율적 피드백 기반 웹 서비스 선정을 위한 품질 브로커 아키텍처의 설계 및 구현)

  • Seo, Young-Jun;Song, Young-Jae
    • The KIPS Transactions: Part D / v.15D no.2 / pp.223-234 / 2008
  • Web services now provide an efficient environment for integrating a corporation's internal and external operations, and the number of enterprises seeking to adopt them is growing. As web services develop and new business models appear, both the domestic enterprise environment and the e-business environment are changing. As web services offering similar functions multiply, methods for finding the service that best fits the user's demand become more important. When a consumer must choose among similar web services, quality-of-service (QoS) information is generally needed. The problem, however, is that the advertised QoS information of a web service is not always trustworthy: a provider may publish inaccurate QoS information to attract more customers, or the published information may be out of date. Allowing current customers to rate the QoS they receive from a web service, and making these ratings public, can give new customers valuable information for ranking services. This paper proposes an agent-based quality broker architecture that helps the service consumer find the service offering the optimal quality. Because the architecture selects a web service for the consumer dynamically, it can accommodate changes in the consumer's quality requirements: the consumer searches for the service with the optimal quality criteria through a UDDI browser connected to the quality broker server. User intervention in deciding the quality criteria values of each service is minimized. In existing selection architectures, objective evaluation was difficult because the consumer's service selection relied on subjective quality classes; in the proposed architecture, objectivity is secured because an agent at the consumer's location monitors binding information to decide the quality criteria values.
In other words, QoS information that the provider does not supply is obtained through QoS information sharing based on feedback from the consumer-side agents.
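The broker's core decision, blending advertised QoS with the feedback monitored by consumer-side agents, can be sketched as a simple weighted ranking. The blend weight `alpha` and the single scalar QoS score are illustrative assumptions, not the paper's model.

```python
from statistics import mean

def rank_services(advertised, feedback, alpha=0.3):
    # blend each provider's advertised QoS score with the QoS observed
    # by consumer-side monitoring agents; a low alpha trusts the
    # monitored feedback more than the advertisement
    scores = {}
    for service, adv in advertised.items():
        observed = mean(feedback[service]) if feedback.get(service) else adv
        scores[service] = alpha * adv + (1 - alpha) * observed
    return sorted(scores, key=scores.get, reverse=True)
```

A service that advertises a high score but receives poor feedback is thus demoted below an honestly advertised competitor, which is the behavior the feedback-sharing mechanism is meant to produce.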

The Definition of Outer Space and the Air/Outer Space Boundary Question (우주의 법적 지위와 경계획정 문제)

  • Lee, Young-Jin
    • The Korean Journal of Air & Space Law and Policy / v.30 no.2 / pp.427-468 / 2015
  • To date we have considered the theoretical views, the positions of states, and the discourse within the international community, such as the UN Committee on the Peaceful Uses of Outer Space (COPUOS), regarding the air/outer space boundary question, one of the first issues taken up by UN COPUOS and the starting point for defining the outer space area. Discussions in the United Nations and among scholars within each state regarding the delimitation issue have often divided those favoring a functional approach (the functionalists) from those seeking the delineation of a boundary (the spatialists). The spatialists emphasize that the boundary between airspace and outer space should be delimited because outer space is a kind of public domain from which sovereign jurisdiction is excluded, as stated in Article II of the Outer Space Treaty, whereas Article 1 of the Chicago Convention evidences the acknowledgement of sovereignty over airspace as international customary law, whose binding force exists independently of the Convention. The functionalists, backed initially by the major space powers, viewed any boundary demarcation as potentially restricting their access to space, whether for peaceful or non-military purposes, and considered it insufficient or inadequate to delimit a boundary of outer space without clear scientific and technological evidence. Over the last fifty-plus years there have been great developments in the exploration and use of outer space, yet a large number of states, including those taking the functionalist view, have maintained a negative attitude. Since location is a decisive factor in the choice of the applicable legal regime, a purely functional approach to regulating activities in the space above the Earth offers no solution.
It is therefore welcome that clear evidence of a growing recognition of, and national practice concerning, a spatial approach to the problem is gaining support among a large number of states as well as among publicists. The search for a solution to the problem of demarcating the two legal regimes governing the space above the Earth has undoubtedly been facilitated, and a number of countries, including Russia, have already advocated accepting the lowest perigee of orbiting space objects, at a height of about 100 km, as the boundary of outer space. In fact, the lowest perigee at which space objects can continue orbiting the Earth has already imposed itself as a natural criterion for the delimitation of outer space, as evidenced by the constant practice of a large number of states and their tacit consent to space activities carried out at and beyond this altitude. Of course, space powers such as the U.S.A., the U.K., and France still hold numerous opposing views on the delineation of an outer space boundary. Therefore, solving the legal issues the international community faces in outer space activities, including the delimitation problem, requires above all a positive and peaceful will for international cooperation. From this viewpoint, President John F. Kennedy described the rationale behind outer space activities in his famous "Moon speech" at Rice University in 1962, calling upon Americans and all mankind to strive for peaceful cooperation and coexistence in future outer space activities: "There is no strife, … nor any international conflict in outer space as yet. But its hazards are hostile to us all: Its conquest deserves the best of all mankind, and its opportunity for peaceful cooperation may never come again."
This speech still offers us, in the contemporary era, ample suggestions for further peaceful cooperation in outer space activities, including the delimitation of outer space.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Growing demand for big data analysis has driven the vigorous development of related technologies and tools, while the development of IT and the rising penetration of smart devices produce large amounts of data. Data analysis technology is rapidly becoming popular, attempts to acquire insights through data analysis continue to increase, and big data analysis will only become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting it, but growing interest is stimulating programming education and the development of many analysis tools, so the entry barriers are gradually falling and the technology is spreading; as a result, big data analysis is increasingly expected to be performed by the demanders of the analysis themselves. At the same time, interest in various kinds of unstructured data, especially text data, keeps growing. New web platforms and techniques generate text data in bulk and encourage active attempts to analyze it, and the results of text analysis are used in many fields. Text mining is a concept that embraces the various theories and techniques for text analysis; among them, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters, and it is valued as a very useful technique because it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the whole collection must be analyzed at once to identify the topic of each document. This leads to long processing times when topic modeling is applied to many documents, and to a scalability problem in which processing time grows steeply with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome it, a divide-and-conquer approach can be applied to topic modeling: a large document collection is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This allows topic modeling over many documents with limited system resources, improves processing speed, and can significantly reduce analysis time and cost because documents can be analyzed in each location or place without first combining them. Despite these advantages, the approach has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of the approach must be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has been studied less than other topic modeling methods. In this paper, we propose a topic modeling approach that addresses both problems.
First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirmed that the proposed methodology yields results similar to topic modeling over the entire collection, and we also propose a reasonable method for comparing the results of the two methods.
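The RGS-to-local topic mapping can be sketched as a nearest-neighbor match between term distributions. Cosine similarity is an assumed choice here; the paper does not state which similarity measure it uses.

```python
import math

def cosine(p, q):
    # cosine similarity between two sparse term distributions (dicts
    # mapping term -> weight)
    dot = sum(w * q.get(term, 0.0) for term, w in p.items())
    norm_p = math.sqrt(sum(w * w for w in p.values()))
    norm_q = math.sqrt(sum(w * w for w in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def map_local_to_global(local_topics, global_topics):
    # assign every local topic to the most similar RGS (global) topic
    return {
        local_id: max(global_topics,
                      key=lambda gid: cosine(dist, global_topics[gid]))
        for local_id, dist in local_topics.items()
    }
```

Once each local topic has a global partner, checking whether a document lands in the "same" topic locally and globally reduces to comparing its local topic's mapped partner with its global assignment.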

The Study on the Influence of Capstone Design & Field Training on Employment Rate: Focused on Leaders in INdustry-university Cooperation(LINC) (캡스톤디자인 및 현장실습이 취업률에 미치는 영향: 산학협력선도대학(LINC)을 중심으로)

  • Park, Namgue
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.4 / pp.207-222 / 2023
  • To improve employment rates, most universities operate programs to strengthen students' employment and entrepreneurship capabilities, whether or not they have been selected for the Leaders in INdustry-university Cooperation (LINC) program; universities outside the metropolitan area, in particular, are striving desperately to raise employment rates. To overcome the limits imposed by university establishment type and location, which strongly affect the employment rate, universities run startup education and startup support programs to strengthen employment and entrepreneurship, and continuously operate capstone design and field training as industry-academia-linked education programs. Previous studies have verified effectiveness centered on LINC, but no longitudinal study based on public disclosure indicators has been reported that examines together all the factors affecting the employment rate: university factors, startup education and startup support, and capstone design and field training as industry-academia-linked education programs. This study targets 116 universities satisfying the relevant conditions in the recently released university disclosure indicators for 2018 to 2020, covering university factors, startup education and startup support, and capstone design and field training as industry-academia-linked education programs as factors affecting the employment rate, and analyzes the differences between the 51 LINC participating universities and the 64 non-participating universities.
In addition, because the public indicators contain no historical information on students' overlapping participation, we drew on exposure-effect theory, which holds that long-term exposure to employment and entrepreneurship competency-enhancement programs affects the employment rate through strengthened competency, and verified the effectiveness of the second-phase LINC+ (socially customized Leaders in Industry-University Cooperation) from 2017 to 2021 through a longitudinal causal analysis. The study found that the startup education and startup support and the capstone design and field training programs of the second-phase LINC+ did not affect the employment rate. The longitudinal causal analysis reconfirmed that, owing to existing university factors, universities in the metropolitan area still have higher employment rates than those outside it, and private universities higher rates than national universities. Among the employment and entrepreneurship competency-strengthening programs, the number of students completing entrepreneurship courses, the number completing capstone design, the amount of capstone design funding, and the number of dedicated faculty members partially affected the employment rate in some years, while field training had no effect in any year. Long-term exposure to the entrepreneurship capacity-building program was confirmed not to affect the employment rate. It was therefore reconfirmed that, to improve university employment rates, the limitations faced by non-metropolitan and national or public universities must be overcome.
To that end, it is important, as employment and entrepreneurship competency-strengthening programs, to reinforce entrepreneurship through participation in entrepreneurship lectures and to introduce with confidence the capstone design program, which strengthens problem-based learning (PBL). For field training actually to affect the employment rate, a substantial program must be pursued through reorganization of the overall academic system and organization.


The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.1-23 / 2018
  • Since the turn of the 21st century, various high-quality services have emerged along with the growth of the internet and information and communication technologies. In particular, the e-commerce industry, in which Amazon and eBay stand out, has been exploding in scale. As e-commerce grows and more products are registered at online shopping malls, customers can easily find and compare what they want to buy. A problem has arisen with this growth, however: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. Searching with a generalized keyword returns too many products; conversely, typing in product details returns few products, because concrete product attributes are rarely registered. In this situation, automatic machine recognition of the text in images can be a solution. Because the bulk of product details are written in catalogs in image format, most product information cannot be found by text input in the current text-based search system; if the information in images can be converted to text, customers can search by product details and shop more conveniently. Existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail under certain conditions, such as text that is too small or fonts that are inconsistent. This research therefore proposes a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s.
The Single Shot MultiBox Detector (SSD), a model credited with strong object-detection performance, can be used with its structure redesigned to account for the differences between text and objects. However, because deep learning models are trained by supervised learning, the SSD model needs a large amount of labeled training data. Labeling the location and class of the text in catalogs manually raises many problems: keywords can be missed through human error; collection is too time-consuming given the scale of data needed, or too costly if many workers are hired to shorten the time; and finding images that contain specific keywords to be trained is itself difficult. To solve this data issue, this research developed a program that creates training data automatically. The program composes catalog-like images containing various keywords and pictures while simultaneously saving the location information of the keywords. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves: the SSD model achieved a recognition rate of 81.99% with 20,000 images created by the program. The research also tested the efficiency of the SSD model under different data conditions to analyze which features of the data influence the performance of recognizing text in images. The results show that the number of labeled keywords, the addition of overlapping keyword labels, the presence of unlabeled keywords, the spacing between keywords, and differences in background images are all related to the performance of the SSD model. This test can guide performance improvement of the SSD model, or of other deep-learning-based text recognizers, through high-quality data.
The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in e-commerce: suppliers can spend less time registering product keywords, and customers can search for products using the details written in the catalog.
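The data-generation idea, placing known keywords on a catalog-like canvas while recording their bounding boxes, can be sketched without any rendering library as follows. The canvas size, fixed character metrics, and retry-based placement are assumptions for illustration; the authors' program also composes pictures and fonts, which this sketch omits.

```python
import random

def make_training_sample(keywords, canvas=(800, 1200),
                         char_w=18, char_h=28, seed=None):
    # place each keyword at a random non-overlapping position on the
    # canvas and record its SSD-style annotation (label + bounding box);
    # canvas size and character metrics are illustrative assumptions
    rng = random.Random(seed)
    boxes, annotations = [], []
    for word in keywords:
        w, h = char_w * len(word), char_h
        for _ in range(100):                      # retry until no overlap
            x = rng.randrange(0, canvas[0] - w)
            y = rng.randrange(0, canvas[1] - h)
            box = (x, y, x + w, y + h)
            if all(box[2] <= b[0] or box[0] >= b[2] or
                   box[3] <= b[1] or box[1] >= b[3] for b in boxes):
                boxes.append(box)
                annotations.append({"label": word, "bbox": box})
                break
    return annotations
```

Rendering the words at the recorded boxes (e.g. with an image library) then yields an image whose ground-truth labels are known by construction, which is what removes the manual-labeling bottleneck.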