• Title/Summary/Keyword: Problems (types of)


A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science / v.57 no.1 / pp.82-108 / 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics that utilizes open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning methods. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and producing a digital drawing. To enhance accessibility, the method uses only open-source software throughout the entire process. The results of this study confirm that, in quantitative evaluation, the deviation between numerical measurements of the actual artifact and the 3D model was minimal, and the quantitative quality analyses of the open-source and commercial software showed high similarity. However, data processing was overwhelmingly faster with the commercial software, presumably owing to the higher computational speed of its improved algorithms. In qualitative evaluation, some differences in mesh and texture quality occurred: 3D models generated by open-source software showed noise on the mesh surface, rough mesh surfaces, and difficulty in confirming the production marks and the expression of patterns on relics. Nevertheless, some of the open-source software generated quality comparable to that of commercial software in both quantitative and qualitative evaluations. Open-source software for editing 3D models supported not only post-processing, matching, and merging of 3D models, but also scale adjustment, joining-surface production, and rendering of the images necessary for the actual measurement of relics. The final drawing was traced in a CAD program that is also open-source. In archaeological research, photogrammetry is applicable to various processes, including excavation, report writing, and research on numerical data from 3D models. With the breakthrough development of computer vision, the types of open-source software have diversified and their performance has significantly improved. Given the high accessibility of such digital technology, 3D model data acquired in archaeology will be used as basic data for the preservation and active study of cultural heritage.
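The quantitative check this abstract describes, comparing measurements of the real artifact against the 3D model, can be illustrated with a short sketch. The snippet below is a minimal illustration assuming a mesh exported in meters and using the open-source trimesh library; the file name and caliper values are hypothetical, not taken from the study.

```python
# Hedged sketch: compare hypothetical caliper measurements of an artifact with
# the bounding-box extents of a photogrammetric mesh. Assumes model units are
# meters; trimesh is one open-source option, not the software used in the paper.
import trimesh

mesh = trimesh.load("artifact_model.ply")            # hypothetical export path
width_mm, depth_mm, height_mm = mesh.extents * 1000  # meters -> millimeters

measured = {"width": 142.0, "depth": 98.5, "height": 211.3}  # hypothetical caliper data (mm)
modeled = {"width": width_mm, "depth": depth_mm, "height": height_mm}

for axis in measured:
    deviation = modeled[axis] - measured[axis]
    print(f"{axis}: caliper {measured[axis]:.1f} mm, "
          f"model {modeled[axis]:.1f} mm, deviation {deviation:+.2f} mm")
```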

Development and Feasibility Study of the Nature of Science Instrument for Elementary School Students (초등학생용 과학의 본성 검사 도구 개발 및 타당성 검토)

  • Park, Jaehyeon;Park, Jaeyong
    • Journal of Korean Elementary Science Education / v.41 no.4 / pp.701-724 / 2022
  • In this study, a Nature of Science (NOS) instrument for elementary school students, in the form of open-ended questionnaires, was developed to reveal elementary school students' perceptions of the NOS, and its validity and effectiveness were investigated. To develop the instrument, problems that may occur when applying existing NOS instruments to elementary school students were analyzed, and the development direction of the new instrument was established on that basis. In addition, after selecting seven NOS types suitable for the level of elementary school students, a preliminary instrument was produced by modifying and supplementing the items of existing instruments for each type or by developing new items. Finally, an instrument consisting of eight questions was completed by adding one question asking for a comprehensive understanding of science to the seven questions related to each NOS type, following a content validity test by a group of science education experts. To verify the practical effect of the developed instrument, pre- and post-tests were conducted on 50 students in two sixth-grade classes at two elementary schools in Seoul: 'existing instrument → developed instrument' in one class, and 'developed instrument → existing instrument' in the other. The collected data were compared through summative content analysis and analyzed with the Wilcoxon signed-rank test. Comparing students' responses to the existing and the developed NOS instruments showed that students' perspectives on the NOS were more diverse when the developed instrument was used, and the level of error caused by misinterpreting the intention of a question was reduced. In addition, with the developed instrument, the responses of the majority of students changed to become more specific, at a statistically significant level. Based on these results, implications for the development of NOS instruments suitable for elementary school students are discussed.
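The paired pre/post comparison described above can be sketched briefly. This is a minimal illustration assuming each student's open-ended answers have already been coded to numeric scores; the score values are hypothetical, and scipy's wilcoxon is used as a stand-in for whatever statistics package the authors used.

```python
# Hedged sketch: Wilcoxon signed-rank test on paired, coded questionnaire scores.
from scipy.stats import wilcoxon

# Hypothetical per-student scores for the existing vs. developed instrument.
existing_scores  = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]
developed_scores = [3, 3, 4, 2, 4, 3, 2, 4, 2, 3]

stat, p_value = wilcoxon(existing_scores, developed_scores)
print(f"Wilcoxon W = {stat:.1f}, p = {p_value:.4f}")
# p < 0.05 would indicate a statistically significant shift between instruments.
```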

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.39-70 / 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to research the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and then accurately figure out customer preferences. In existing data-based survey methods, a sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of the frequently used words in the collected text documents. To research the preference for a particular product, the existing approach (1) collects review posts related to the product from several product-review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and stop-word removal; (3) classifies the polarity (positive or negative) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. The existing approach also automatically finds important sentences (or phrases) carrying positive or negative meaning about the product. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want a summary note of the positive and the negative points in the 'car design' aspect, and they want similar information for other aspects such as 'car quality', 'car performance', and 'car service'. Such information will enable customers to make good choices when purchasing brand-new vehicles, and automobile makers will be able to figure out the preferences and positive/negative points for new models on the market; in the near future, the weak points of those models can be improved based on the sentiment analysis. For this, the existing approach computes the sentiment score of each sentence (or phrase) and then selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings that limit its application to real systems: (1) The main aspects of a product (e.g., the design, quality, performance, and service of a Hyundai Sonata) are not considered; without aspect-aware sentiment analysis, only a summary note with the overall positive and negative ratios and the top-k highest-scoring sentences (or phrases) in the entire corpus is reported to customers and car makers, which is not enough, so the main aspects of the target product need to be considered. (2) Since the same word has different meanings across domains, a sentiment lexicon proper to each domain must be constructed, and an efficient way to construct it is required because lexicon construction is labor-intensive and time-consuming.
To address these problems, in this article we propose a novel product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using the aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed for each aspect. Unlike the existing approach, using hidden topics lets experts construct the sentiment lexicon easily and quickly, and by reinforcing topic semantics we can improve the accuracy of product reputation mining well beyond that of the existing approach. In the experiments, we collected large sets of review documents on domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; showed top-k positive and negative summaries per aspect; and conducted statistical analysis. Our experimental results clearly show the effectiveness of the proposed method compared with the existing method.
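The topic-extraction step in (1) can be illustrated with a small sketch. This is a minimal illustration using gensim's LDA on a toy, pre-tokenized review corpus; the reviews, topic count, and parameters are hypothetical, and the paper's own algorithm goes well beyond this step.

```python
# Hedged sketch: extract latent topics from review documents with LDA and treat
# the top words of each topic as candidate product aspects.
from gensim import corpora, models

reviews = [
    ["design", "sleek", "exterior", "color"],
    ["engine", "performance", "acceleration", "smooth"],
    ["service", "center", "repair", "slow"],
    ["design", "interior", "dashboard", "modern"],
]  # hypothetical, pre-tokenized, stop-words already removed

dictionary = corpora.Dictionary(reviews)
corpus = [dictionary.doc2bow(doc) for doc in reviews]

lda = models.LdaModel(corpus, num_topics=3, id2word=dictionary,
                      passes=10, random_state=1)
for topic_id in range(3):
    # Top words of each latent topic serve as candidate aspects.
    print(topic_id, lda.show_topic(topic_id, topn=4))
```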

A Study on the Full-scale Soil Washing Process Improved by Multi-stage Continuous Desorption and Agitational Desorption Techniques to Remediate Petroleum-contaminated Soils (현장규모의 유류오염토양 세척공법에 다단연속탈착 및 교반탈착기법을 이용한 세척공정 성능향상에 관한 연구)

  • Seo, Yong-Sik;Choi, Sang-Il;Jang, Min
    • Journal of Soil and Groundwater Environment / v.13 no.5 / pp.81-87 / 2008
  • In the wake of the transfer of US army bases, recent surveys reported serious contamination of soils caused by petroleum released from storage facilities and by heavy metals accumulated in rifle ranges. These problems have raised concern about cleanup technologies for contaminated soils. In this study, a full-scale soil washing process improved by multi-stage continuous desorption and agitational desorption techniques was examined for petroleum-contaminated soils obtained from three remedial sites that contained 29.3, 16.6, and 7.8% silt and clay, respectively. The initial concentrations of total petroleum hydrocarbons (TPH) were 5,183, 2,560, and 4,860 mg/kg for the respective soils. Pure water was used for the washing process, and the process water was recycled 100% for over 6 months. The full-scale washing tests showed that TPH concentrations for coarse soils (> 3.0 mm) were 50~356 mg/kg (85.2~98.2% removal rates), regardless of the silt and clay contents of soils A, B, and C, when the soils were washed at an injection pressure of 3.0 kg/cm² with wet particle separation. Based on the initial TPH concentrations, the TPH removal rates for the three sites were 85.2, 98.2, and 89.9%. For soils in the range of 3.0~0.075 mm, first-stage desorption as a physical method resulted in TPH concentrations of 834, 1,110, and 1,460 mg/kg, and additional multi-stage continuous desorption reduced them to 330, 385, and 245 mg/kg, equivalent to removal rates of 92.4, 90.6, and 90.1%, respectively. For fine soil (0.075~0.053 mm), multi-stage continuous desorption yielded 791, 885, and 1,560 mg/kg, and additional agitational desorption brought TPH concentrations to 428, 440, and 358 mg/kg, removal rates of 92.0, 93.9, and 92.9% relative to the initial concentrations. These results imply that a strategic soil washing process can be applied to various types of contaminated soils to meet the regulatory TPH limit.
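The removal rates quoted throughout this abstract follow the standard arithmetic removal (%) = (C0 − C) / C0 × 100. Below is a minimal sketch of that computation; the regulatory limit shown is a placeholder for illustration, not a value from the paper.

```python
# Hedged sketch: TPH removal-rate arithmetic and a placeholder compliance check.
def removal_rate(initial_mg_kg: float, final_mg_kg: float) -> float:
    """Percentage of TPH removed relative to the initial concentration."""
    return (initial_mg_kg - final_mg_kg) / initial_mg_kg * 100.0

# Site A fine fraction from the abstract: 5,183 mg/kg initial, 428 mg/kg final.
rate = removal_rate(5183, 428)
print(f"removal rate: {rate:.1f}%")            # ~91.7% by this formula

REGULATORY_LIMIT_MG_KG = 500  # placeholder threshold for illustration
print("meets limit:", 428 <= REGULATORY_LIMIT_MG_KG)
```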

The Diagnosis of Work Connectivity between Local Government Departments -Focused on Busan Metropolitan City IT Project - (지자체 부서 간 업무연계성 진단 -부산광역시 정보화사업을 중심으로 -)

  • JI, Sang-Tae;NAM, Kwang-Woo
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.3 / pp.176-188 / 2018
  • Modern urban problems are increasingly complex and cannot be solved by the power of a single department, so the necessity of establishing a cooperation system based on data sharing between departments is increasing. Therefore, this study analyzed Busan Metropolitan City's IT projects from 2014 to 2018 to understand the utilization and sharing status of departmental data, from the viewpoint that cooperation between departments can start from the sharing of data with high common utility. In addition, based on the results of an FGI (Focus Group Interview) conducted with officials of the departments responsible for the informatization projects, we verified the results of the data status analysis. At the same time, we identified the necessity of data links between departments through SNA (Social Network Analysis) and presented the data that should be shared first in the future. As a result, most of the information systems currently use limited data only within the department that produced the data, and most of the linked data was concentrated in the information department. Therefore, this study suggests the following solutions. First, to prevent overlapping investments caused by departments operating individually and to share information, it is necessary to build a small platform that ties departments with high mutual connectivity into small blocks. Second, a local-level process is needed to develop data standards, as an extension of national standards, in order to expand the information used in various fields. Third, we propose a system that can integrate various types of information based on address and location information through a cloud-based GIS platform. The results of this study are expected to contribute to building a cooperation system between departments through expanded information sharing and reduced cost.
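The SNA step described above can be sketched as a small graph exercise. This minimal illustration uses networkx with hypothetical departments and data links; ranking producers by out-degree centrality is one plausible reading of "data that should be shared first", not the paper's exact measure.

```python
# Hedged sketch: departments as nodes, data links as directed edges, then rank
# data producers whose outputs many other departments depend on.
import networkx as nx

links = [  # (producing department, consuming department) - hypothetical
    ("information", "transport"), ("information", "welfare"),
    ("transport", "urban_planning"), ("welfare", "health"),
    ("information", "urban_planning"), ("health", "welfare"),
]
G = nx.DiGraph(links)

# High out-degree centrality marks departments whose data are widely consumed,
# i.e., candidates for priority data sharing.
ranking = sorted(nx.out_degree_centrality(G).items(), key=lambda kv: -kv[1])
for dept, score in ranking:
    print(f"{dept}: {score:.2f}")
```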

Characteristics of Intrusion MO and Perception of Target Hardening of Burglars (침입절도범 재소자의 수법 특성과 타겟하드닝 관련 인식)

  • Park, Hyeonho;Kim, Kang-Il;Kim, Hyo-gun
    • Korean Security Journal / no.60 / pp.33-61 / 2019
  • It is quite difficult to actually prove the effectiveness of so-called target hardening, one of the various strategies used to reduce crime, which has recently become a serious social problem. In particular, three to five minutes is often cited as the golden time within which intruders give up or stop; this figure is based on foreign studies and some indirect research cases in Korea, but no study had more directly identified the average break-in working time, or the give-up time based on the elapsed time when security hardware resists intruders. This study was the first of its kind in Korea to survey a sample of 90 inmates imprisoned for break-in burglary as of August 2018, profiling the average criminal experience, education level, age, height, and weight of typical Korean professional break-in thieves, as well as their specific methods, average break-in working times, and the criteria for giving up when entry fails. According to the analysis of the number of prior intrusion thefts, many of the respondents were professional intrusion offenders, and in physical characteristics they did not differ much from ordinary adult men. Residential facilities were targeted most often, followed by commercial and educational facilities. Among residential facilities, single-family housing accounted for the largest portion, followed by multi-family housing, high-rise apartments (above three stories), and low-rise apartments (one to three stories), and the share of high-rise apartments was higher than expected, suggesting they are not as safe as commonly assumed. Since break-in working time was not measured during the actual crimes, the reported averages are assumed to reflect perceived (psychological) time at places where the break-in was difficult. As for the time at which offenders would give up, more than half of the respondents said they would abandon the crime within four minutes, suggesting that a significant number of intrusion crimes can be prevented if a facility provides even four minutes of intrusion resistance; most intruders would give up the break-in if the security hardware resisted for more than five minutes.
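The give-up time finding lends itself to a short check: what share of surveyed offenders would abandon a break-in within a given resistance time. The minute values below are hypothetical stand-ins for the survey responses, not the paper's data.

```python
# Hedged sketch: cumulative share of offenders giving up within a threshold time.
import numpy as np

give_up_minutes = np.array([2, 3, 3, 4, 4, 5, 5, 6, 7, 10])  # hypothetical

for threshold in (4, 5):
    share = np.mean(give_up_minutes <= threshold) * 100
    print(f"give up within {threshold} min: {share:.0f}%")
```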

An Estimation of the Minimum Distance Between a Roundabout and Signal Crosswalk Using VISSIM (VISSIM분석을 통한 회전교차로 인접 신호횡단보도의 최소이격거리 산정)

  • KIM, Young Beom;LEE, Dongmin;Jun, Jin Woo;Cho, Hanseon
    • Journal of Korean Society of Transportation / v.33 no.4 / pp.337-347 / 2015
  • Since roundabouts began to be introduced in 2010, more than 350 have been installed in Korea. The types of roundabouts constructed have recently become diverse, as have the intersection conditions under which they are installed; however, safety problems have made it difficult to install roundabouts around school zones. In this study, the appropriate distance between an adjacent signalized crosswalk and a roundabout was estimated to secure pedestrian safety and operational efficiency around school zones. Based on the analyses, a minimum distance standard was suggested that preserves the operational effectiveness of a roundabout according to traffic volume, traffic flow, and pedestrian green time while securing pedestrian safety and convenience. The average delay of a roundabout was analyzed for various distances between the adjacent crosswalk and the roundabout and for different pedestrian signal times, traffic volumes, and traffic flow rates. The study found that delay at a roundabout roughly quadrupled when a signalized crosswalk was immediately adjacent; however, with sufficient distance between the crosswalk and the roundabout, the added delay can be considerably reduced. The critical distance between a roundabout and a signalized crosswalk was about 50 meters for roundabouts with entry flows of 200-500 vehicles/hour/lane, 20-40% left-turn traffic, and pedestrian green time over 15 seconds. In conclusion, if an adjacent signalized crosswalk is at least 40 meters from the roundabout, it can be installed and operated for students' safety around school zones.
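The distance criteria in this abstract can be restated as a small decision helper. The thresholds (about 50 m under the stated traffic conditions, a 40 m minimum otherwise) are taken from the abstract; the function itself is a hypothetical convenience, not part of the paper.

```python
# Hedged sketch: check a candidate crosswalk distance against the abstract's
# suggested criteria. Thresholds follow the abstract; the rule structure is
# an assumption made for illustration.
def crosswalk_distance_ok(distance_m: float,
                          entry_flow_vphpl: float,
                          left_turn_pct: float,
                          ped_green_s: float) -> bool:
    """True if the crosswalk-to-roundabout distance meets the suggested minimum."""
    heavy_case = (200 <= entry_flow_vphpl <= 500
                  and 20 <= left_turn_pct <= 40
                  and ped_green_s >= 15)
    required = 50.0 if heavy_case else 40.0
    return distance_m >= required

print(crosswalk_distance_ok(45, entry_flow_vphpl=400, left_turn_pct=30, ped_green_s=20))  # False
print(crosswalk_distance_ok(45, entry_flow_vphpl=150, left_turn_pct=10, ped_green_s=10))  # True
```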

Selectively Partial Encryption of Images in Wavelet Domain (웨이블릿 영역에서의 선택적 부분 영상 암호화)

  • ;Dujit Dey
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.6C / pp.648-658 / 2003
  • As the use of image/video content increases, a security problem arises for paid image data or data requiring confidentiality. This paper proposes an image encryption methodology to hide image information; its target data are the results of quantization in the wavelet domain. The method encrypts only part of the image data rather than the whole original image, using three data-selection strategies. First, exploiting the fact that the wavelet transform decomposes the original image into frequency sub-bands, only some of the sub-bands are included in the encryption, enough to make the resulting image unrecognizable. Second, among the bits representing each pixel, only the MSBs are taken for encryption. Finally, the pixels to be encrypted in a specific sub-band are selected randomly with an LFSR (Linear Feedback Shift Register); part of the encryption key is used as the seed value of the LFSR and for selecting which parallel output bits of the LFSR drive the random selection, which increases the strength of the encryption algorithm. Experiments with software implementations of the proposed methods on about 500 images showed that encrypting only about 1/1000 of the original image data is enough to make the original image unrecognizable. Consequently, the proposed methods are efficient image encryption methods that achieve a strong encryption effect with a small amount of encryption. This paper also proposes several encryption schemes according to the selection of sub-bands and the number of LFSR output bits used for pixel selection, and shows that there exists a trade-off between execution time and encryption effect, meaning the proposed methods can be used selectively according to the application area. Also, because the proposed methods operate at the application layer, they are expected to be a good solution for the end-to-end security problem, which is emerging as one of the important problems in networks with both wired and wireless sections.
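The LFSR-driven random selection is the most code-like part of this abstract. Below is a minimal sketch: a Fibonacci-style LFSR emits one bit per coefficient, and selected coefficients have their MSB flipped as a stand-in for encryption. The taps, register width, and toy coefficients are hypothetical choices, not the paper's parameters.

```python
# Hedged sketch: LFSR-based selection of wavelet coefficients, with MSB flipping
# standing in for the actual cipher applied to selected data.
def lfsr_bits(seed: int, taps: tuple[int, ...], width: int):
    """Yield an endless stream of LFSR output bits (Fibonacci style)."""
    state = seed & ((1 << width) - 1)
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> t) & 1
        state = ((state << 1) | bit) & ((1 << width) - 1)
        yield bit

coeffs = [200, 13, 97, 255, 64, 180, 31, 142]  # toy 8-bit quantized sub-band values
stream = lfsr_bits(seed=0xACE1, taps=(0, 2, 3, 5), width=16)  # hypothetical key-derived seed

encrypted = [c ^ 0x80 if next(stream) else c for c in coeffs]  # flip MSB of selected pixels
print(encrypted)
```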

Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.81-96 / 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Meanwhile, content-based video analysis using visual features has been the main source for video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which can extract various features from audio-visual data such as frames, shots, colors, texture, or shape, and similarity between videos can be measured through such analysis. However, a movie, one of the typical types of video data, is organized by story as well as audio-visual data. This causes a semantic gap between the significant information recognized by people and the information resulting from content-based analysis when low-level content-based video analysis using only audio-visual data is applied to information retrieval for movies. The reason for this semantic gap is that the story line of a movie is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be executed by content-based analysis techniques alone; a formal model is needed that can determine relationships among movie contents or track meaning changes, in order to accurately retrieve story information. Recently, story-based video analysis techniques using a social network concept have emerged for story information retrieval. These approaches represent a story through the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in the relationships between characters as the story develops. Second, they miss profound information, such as the emotions indicating the identities and psychological states of the characters; emotion is essential to understanding a character's motivation, conflict, and resolution. Third, they do not take into account the events and background that contribute to the story. This paper therefore reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks. We also suggest the necessary elements, namely characters, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, an extensive English thesaurus that offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and we present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of the character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links); we propose a method that can track a character's inner nature by tracing indices such as the degree, in-degree, and out-degree of the character network as the movie progresses. Finally, the spatial background where characters meet and where events take place is an important element of the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie.
Important places where main characters first meet or where they stay for long periods can be extracted through this scene map. In view of the aforementioned three elements (characters, events, background), we extract a variety of story-related information and evaluate the performance of the proposed method. We can track the story information extracted over time and detect changes in a character's emotion or inner nature, spatial movements, and conflicts and resolutions in the story.
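The character-network tracking described above can be sketched with networkx: build a directed dialogue graph per scene and trace one character's in- and out-degree as the story progresses. The characters and scene data below are hypothetical, not extracted from the Pretty Woman script.

```python
# Hedged sketch: cumulative character network per scene; out-degree approximates
# how actively a character speaks, in-degree how often they are addressed.
import networkx as nx

scenes = [
    [("Vivian", "Edward"), ("Edward", "Vivian")],
    [("Edward", "Stuckey"), ("Vivian", "Edward")],
    [("Kit", "Vivian"), ("Vivian", "Kit"), ("Vivian", "Edward")],
]  # hypothetical (speaker, addressee) pairs per scene

G = nx.MultiDiGraph()
for i, scene in enumerate(scenes, start=1):
    G.add_edges_from(scene)
    print(f"scene {i}: Vivian out={G.out_degree('Vivian')}, "
          f"in={G.in_degree('Vivian')}")
```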

Recommender Systems using Structural Hole and Collaborative Filtering (구조적 공백과 협업필터링을 이용한 추천시스템)

  • Kim, Mingun;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.107-120 / 2014
  • This study proposes a novel recommender system that uses structural hole analysis to reflect qualitative and emotional information in the recommendation process. Although collaborative filtering (CF) is the most popular recommendation algorithm, it has some limitations, including scalability and sparsity problems. The scalability problem arises when the number of users and items becomes quite large: CF cannot scale up because of the large computation time needed to find neighbors in the user-item matrix as users and items increase in real-world e-commerce sites. Sparsity is a common problem of most recommender systems, since users generally evaluate only a small portion of all items; the cold-start problem is the special case of the sparsity problem in which users or items newly added to the system have no ratings at all. When the user's preference data is sparse, two users or items are unlikely to have common ratings, so CF will predict ratings using a very limited number of similar users, and it may produce biased recommendations because the similarity weights may be estimated from only a small portion of the rating data. In this study, we point out a further limitation of conventional CF: it does not consider qualitative and emotional information about users in the recommendation process because it utilizes only the preference scores of the user-item matrix. To address this limitation, this study proposes a cluster-indexing CF model that uses structural hole analysis for recommendations. In general, a structural hole is a location that connects two separate actors in a network without any redundant connections. The actor who occupies a structural hole can easily access non-redundant, diverse, and fresh information; such an actor may therefore be an important person in the focal network, may be the representative person of the focal subgroup, and his or her characteristics may represent the general characteristics of the users in that subgroup. In this sense, we can distinguish friends and strangers of the focal user through structural hole analysis. This study uses structural hole analysis to select the structural holes of subgroups as initial seeds for a cluster analysis. First, we gather data about users' preference ratings for items and their social network information, using a data collection system developed for this research. Then we perform structural hole analysis and find the structural holes of the social network. Next, we use these structural holes as cluster centroids for the clustering algorithm. Finally, this study makes recommendations using CF within the user's cluster and compares the recommendation performance of comparative models. The experimental work combines two experiments. The first is the structural hole analysis, for which this study employs UCINET version 6, a software package for the analysis of social network data. The second performs the modified clustering and the CF that uses the result of the cluster analysis, for which we developed an experimental system in VBA (Visual Basic for Applications) for Microsoft Excel 2007.
For the modified clustering experiment, this study performs clustering based on the Pearson correlation between user preference rating vectors as the similarity measure, and it uses the 'all-but-one' approach for the CF experiment. To validate the effectiveness of our proposed model, we apply three comparative types of CF models to the same dataset. The experimental results show that the proposed model outperforms the comparative models; in particular, it performs significantly better than the two comparative models with cluster analysis according to the statistical significance test, although the difference between the proposed model and the naive model is not statistically significant.
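The seed-selection idea above can be sketched with networkx, which implements Burt's structural hole measures. This is a minimal illustration on a hypothetical friendship graph with toy ratings; picking the lowest-constraint nodes as cluster seeds is one plausible reading of the approach, and the paper's actual pipeline (UCINET plus Excel/VBA) is not reproduced.

```python
# Hedged sketch: nodes with the lowest Burt's constraint span structural holes;
# take them as initial cluster seeds, then use Pearson correlation as the
# within-cluster CF similarity.
import networkx as nx
import numpy as np

friends = [("u1", "u2"), ("u2", "u3"), ("u3", "u1"),
           ("u3", "u4"), ("u4", "u5"), ("u5", "u6"), ("u6", "u4")]  # hypothetical
G = nx.Graph(friends)

constraint = nx.constraint(G)  # lower constraint -> closer to a structural hole
seeds = sorted(constraint, key=constraint.get)[:2]
print("cluster seeds:", seeds)

# Within a cluster, similarity is the Pearson correlation of rating vectors.
ratings = {"u3": np.array([5, 3, 4, 4]), "u4": np.array([4, 3, 5, 3])}  # toy data
sim = np.corrcoef(ratings["u3"], ratings["u4"])[0, 1]
print(f"Pearson similarity(u3, u4) = {sim:.3f}")
```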