• Title/Summary/Keyword: information processing scope

Search Result 174

Comparative Study of Exposure Assessment of Dust in Building Materials Enterprises Using ART and Monte Carlo

  • Wei Jiang;Zonghao Wu;Mengqi Zhang;Haoguang Zhang
    • Safety and Health at Work / v.15 no.1 / pp.33-41 / 2024
  • Background: Dust generated during processing in building materials enterprises can pose a serious health risk. This study compared and analyzed the results of the Advanced REACH Tool (ART) and the Monte Carlo model for dust exposure assessment in building materials enterprises, in order to derive the application scope of the two models. Methods: First, ART and the Monte Carlo model were each used to assess dust exposure in 15 building materials enterprises. Then, a comparative analysis of the exposure assessment results was conducted. Finally, the model factors were examined using correlation analysis and the scope of application of each model was determined. Results: ART is mainly influenced by four factors, namely localized controls, segregation, dispersion, and surface contamination and fugitive emissions, and applies to scenarios where the workplace information of the building materials enterprise is specific and the average dust concentration is greater than or equal to 1.5 mg/m3. The Monte Carlo model is mainly influenced by the workplace dust concentration and is suitable for scenarios where the dust concentration in the workplace is relatively uniform and the average dust concentration is less than or equal to 6 mg/m3. Conclusion: ART is most accurate when workplace information is specific and the average dust concentration is ≥ 1.5 mg/m3, whereas the Monte Carlo model performs best when the dust concentration is homogeneous and the average dust concentration is ≤ 6 mg/m3.
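
For readers unfamiliar with the Monte Carlo side of the comparison, the sketch below shows the basic idea in Python: fit a lognormal distribution to a handful of measured shift-average concentrations and simulate the probability of exceeding a limit value. The sample values, the lognormal assumption, and the 1.5 mg/m3 limit are illustrative and are not taken from the paper.

```python
import numpy as np

# Minimal Monte Carlo exposure sketch (illustrative, not the paper's model):
# fit a lognormal distribution to measured shift-average dust concentrations
# and estimate the probability of exceeding a limit value.
rng = np.random.default_rng(42)

measured = np.array([0.8, 1.2, 2.5, 3.1, 1.9, 4.0])  # hypothetical measurements, mg/m3
mu = np.mean(np.log(measured))
sigma = np.std(np.log(measured), ddof=1)

simulated = rng.lognormal(mean=mu, sigma=sigma, size=100_000)  # simulated exposures, mg/m3

limit = 1.5  # hypothetical limit value, mg/m3
print(f"exceedance probability: {np.mean(simulated > limit):.1%}")
print(f"95th percentile exposure: {np.percentile(simulated, 95):.2f} mg/m3")
```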

Cyber Kill Chain-Based Taxonomy of Advanced Persistent Threat Actors: Analogy of Tactics, Techniques, and Procedures

  • Bahrami, Pooneh Nikkhah;Dehghantanha, Ali;Dargahi, Tooska;Parizi, Reza M.;Choo, Kim-Kwang Raymond;Javadi, Hamid H.S.
    • Journal of Information Processing Systems / v.15 no.4 / pp.865-889 / 2019
  • The need for cyber resilience is increasingly important in our technology-dependent society, where computing devices and data have been, and will continue to be, the target of cyber-attackers, particularly advanced persistent threat (APT) and nation-state/sponsored actors. APT and nation-state/sponsored actors tend to be more sophisticated, having access to significantly more resources and time to facilitate their attacks, which in most cases are not financially driven (unlike typical cyber-criminals). For example, such threat actors often utilize a broad range of attack vectors, cyber and/or physical, and constantly evolve their attack tactics. Thus, having up-to-date and detailed information on APT tactics, techniques, and procedures (TTPs) facilitates the design of effective defense strategies, which is the focus of this paper. Specifically, we posit the importance of taxonomies in categorizing cyber-attacks. Note, however, that existing information about APT attack campaigns is fragmented across practitioner, government (including intelligence/classified), and academic publications, and existing taxonomies generally have a narrow scope (e.g., limited to a small number of APT campaigns). Therefore, in this paper, we leverage the Cyber Kill Chain (CKC) model to "decompose" any complex attack and identify the relevant characteristics of such attacks. We then comprehensively analyze more than 40 APT campaigns disclosed before 2018 to build our taxonomy. Such a taxonomy can facilitate incident response and cyber threat hunting by aiding the understanding of potential attacks on organizations and which attacks may surface. In addition, the taxonomy can allow national security and intelligence agencies and businesses to share their analysis of ongoing, sensitive APT campaigns without the need to disclose detailed information about the campaigns. It can also inform future security policies and mitigation strategy formulation.
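
As a rough illustration of how a kill-chain-indexed taxonomy can be represented, the sketch below maps observed techniques to CKC phases; the record class and the example entries are hypothetical and are not the paper's schema.

```python
from dataclasses import dataclass, field

# Standard Cyber Kill Chain phases; the record structure below is a hypothetical
# illustration of indexing observed TTPs by phase, not the paper's taxonomy.
CKC_PHASES = (
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objectives",
)

@dataclass
class CampaignProfile:
    name: str
    ttps: dict[str, list[str]] = field(default_factory=dict)  # phase -> observed techniques

    def add(self, phase: str, technique: str) -> None:
        if phase not in CKC_PHASES:
            raise ValueError(f"unknown kill-chain phase: {phase}")
        self.ttps.setdefault(phase, []).append(technique)

profile = CampaignProfile("ExampleAPT")  # hypothetical campaign record
profile.add("Delivery", "spear-phishing attachment")
profile.add("Command and Control", "HTTPS beaconing to compromised web servers")
print(profile.ttps)
```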

Overlay Multicast using Geographic Information in MANET (MANET에서의 지리 정보를 이용한 오버레이 멀티캐스트)

  • Lim, Yu-Jin;Ahn, Sang-Hyun
    • The KIPS Transactions:PartC / v.14C no.4 / pp.359-364 / 2007
  • Existing research on overlay multicast in the mobile ad hoc network (MANET) maintains topology information of the dynamically changing network, which may cause severe overhead. In this paper, we propose a new overlay multicast mechanism, region-based overlay multicast in MANET (ROME), which uses the geographic locations of group members. In ROME, the physical topology is divided into small regions and the scope of location updates of group members is limited to a single region. ROME provides scalability by using the coordinates of the center point of a destination region as the destination of a data packet, instead of the list of member addresses of that region. Our simulation results show that ROME gives better performance in terms of packet overhead than other schemes.
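
The core addressing idea, using a region's centre point rather than a member list as the packet destination, can be sketched as follows; the grid size and coordinates are hypothetical.

```python
# Sketch of region-based geographic addressing in the spirit of ROME: the plane
# is divided into a grid and a packet is addressed to the centre of the target
# region rather than to a list of member addresses. All values are hypothetical.
REGION_SIZE = 100.0  # grid cell edge length, metres

def region_of(x: float, y: float) -> tuple[int, int]:
    """Map a node position to its grid region index."""
    return int(x // REGION_SIZE), int(y // REGION_SIZE)

def region_center(region: tuple[int, int]) -> tuple[float, float]:
    """Destination carried in the packet header: the region's centre point."""
    rx, ry = region
    return (rx + 0.5) * REGION_SIZE, (ry + 0.5) * REGION_SIZE

member_position = (237.0, 412.0)  # a member's current location
region = region_of(*member_position)
print(f"member in region {region}, packet addressed to {region_center(region)}")
```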

DEVELOPMENT OF AUGMENTED 3D STEREO URBAN CITY MODELLING SYSTEM BASED ON ANAGLYPH APPROACH

  • Kim, Hak-Hoon;Kim, Seung-Yub;Lee, Ki-Won
    • Proceedings of the KSRS Conference / v.1 / pp.98-101 / 2006
  • In general, stereo images are widely used in remote sensing and photogrammetric applications for the purposes of image understanding and feature extraction or recognition. However, most of these stereo-based applications deal with 2-D satellite images or airborne photos, so their main targets are the generation of small- or large-scale DEMs (Digital Elevation Models) or DSMs (Digital Surface Models) in 2.5-D. In contrast to these previous approaches, the scope of this study is to investigate 3-D stereo processing and visualization of true geo-referenced 3-D features based on the anaglyph technique, and to develop a prototype stereo visualization system for complex 3-D GIS features. As complex 3-D features, various kinds of urban landscape components are taken into account together with their geometric characteristics and attributes. The main functions of this prototype comprise 3-D feature authoring and modeling along with a database schema, stereo matching, and volumetric visualization. Using these functions, several technical aspects of migration into actual 3-D GIS applications are demonstrated with experimental results. It is concluded that this result will contribute to more specialized and realistic applications by linking 3-D graphics with geo-spatial information.
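
The anaglyph composition step itself is simple; a minimal sketch is given below, where random arrays stand in for the rendered left-eye and right-eye views of the 3-D urban scene, which are outside the scope of this sketch.

```python
import numpy as np

# Minimal red-cyan anaglyph composition: red channel from the left-eye view,
# green and blue channels from the right-eye view. The random arrays are
# placeholders for the actual stereo renderings.
h, w = 480, 640
left = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)   # left-eye rendering (placeholder)
right = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)  # right-eye rendering (placeholder)

anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]     # red from the left view
anaglyph[..., 1:] = right[..., 1:]  # green and blue from the right view
```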


Bibliometric Analysis on the Research Trends in Journal of Convergence for Information Technology (중소기업융합학회 수록 논문의 연구동향에 대한 계량서지학적 분석)

  • Kim, Shin-Hee
    • Journal of Convergence for Information Technology / v.10 no.7 / pp.122-130 / 2020
  • The purpose of this study is to identify the current state and trends of convergence research by conducting a bibliometric analysis of the papers published by the Convergence Society for SMB. After collecting 792 papers published from 2012 to 2019, we extracted nouns and compound nouns by text mining, applied three rounds of pre-processing and purification, and then performed a centrality-based network analysis. According to the results, first, both the quantitative and qualitative aspects of the research have improved since 2016, with studies focusing on terms such as influence, convergence, mediating effect, satisfaction, and effect. Second, in the first half of the period (2012-2015), engineering and technical studies were intensively conducted on topics such as information, system, and security. Third, in the second half (2016-2019), the research scope was expanded to college students, parents, and teenagers under topics such as job, self-efficacy, education, satisfaction, depression, and stress. The results of this study are meaningful in identifying existing research trends and in providing information that points to the need to expand into new research areas.
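
The centrality-based network analysis mentioned in the abstract can be sketched roughly as below: build a keyword co-occurrence graph and rank terms by degree centrality. The keyword lists are hypothetical stand-ins for terms extracted by text mining.

```python
from itertools import combinations
import networkx as nx

# Sketch of a keyword co-occurrence network with degree centrality; the keyword
# lists are hypothetical stand-ins for terms extracted by text mining.
papers = [
    ["convergence", "satisfaction", "mediating effect"],
    ["information", "system", "security"],
    ["self-efficacy", "education", "satisfaction"],
]

G = nx.Graph()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        weight = G.get_edge_data(a, b, default={}).get("weight", 0)
        G.add_edge(a, b, weight=weight + 1)  # co-occurrence count as edge weight

for term, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{term}: {score:.2f}")
```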

A study on the establishment and utilization of large-scale local spatial information using search drones (수색 드론을 활용한 대규모 지역 공간정보 구축 및 활용방안에 관한 연구)

  • Lee, Sang-Beom
    • Journal of the Institute of Convergence Signal Processing / v.23 no.1 / pp.37-43 / 2022
  • Drones, one of the Fourth Industrial Revolution technologies expanding from military to industrial use, are being actively used in the search missions of the National Police Agency and in finding missing persons, reducing both the attention that must be spread over a wide area and the deployment of large-scale search personnel. However, legal review of police drone operation is continuously required, and at the same time the importance of advanced systems for such operations and for analyzing captured images in connection with search techniques is increasing. In this study, to facilitate recording, preservation, and monitoring within the concept of precise search and monitoring, spatial information is constructed from photographs rather than from video-data-based search, which makes it possible to achieve high efficiency and to secure the golden time when a precise search is performed. We therefore propose a spatial information construction technique that reduces the resulting data volume by adjusting the completion rate of unnecessary spatial information according to the size of the subject. Through this, the scope of drone search missions over large-scale areas is advanced, and the results are intended to serve as basic data for building a drone operation manual for police searches.
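
As a rough, back-of-the-envelope illustration of the data-volume trade-off, the sketch below relates image overlap to the number of photos needed to cover a search area; all parameters are hypothetical and the paper's actual completion-rate adjustment is not reproduced here.

```python
# Back-of-the-envelope sketch: lowering the image overlap (and hence the achievable
# reconstruction completeness) reduces how many photos, and how much data, a drone
# survey of a fixed area produces. All parameters are hypothetical.
def photo_count(area_m2: float, footprint_m2: float, overlap: float) -> int:
    """Approximate photo count to cover an area at a given forward/side overlap."""
    effective = footprint_m2 * (1.0 - overlap) ** 2  # non-overlapping ground area per photo
    return max(1, round(area_m2 / effective))

area = 500_000.0         # 0.5 km^2 search area
footprint = 80.0 * 60.0  # ground footprint of one photo at the chosen altitude, m^2

for overlap in (0.8, 0.7, 0.6):
    print(f"overlap {overlap:.0%}: ~{photo_count(area, footprint, overlap)} photos")
```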

Real-Time Moving Object Tracking System using Advanced Block Based Image Processing (개선된 블록기반 영상처리기법에 의한 실시간 이동물체 추적시스템)

  • Kim, Dohwan;Cheoi, Kyung-Joo;Lee, Yillbyung
    • Korean Journal of Cognitive Science / v.16 no.4 / pp.333-349 / 2005
  • In this paper, we propose a real-time moving object tracking system based on a block-based image processing technique and human visual processing. The system has two main features. First, to take advantage of the biological mechanism of the human retina, the system has two cameras: a CCD (Charge-Coupled Device) camera equipped with a wide-angle lens for a wider field of view and a Pan-Tilt-Zoom camera. Second, the system divides the input image into a number of blocks and processes them coarsely to reduce the tracking error rate and the processing time. In an experiment, the system showed satisfactory performance, coping with almost every noisy image, detecting moving objects very quickly, and controlling the Pan-Tilt-Zoom camera precisely.
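
A coarse block-based motion test of the kind described above can be sketched as follows; the block size, threshold, and synthetic frames are hypothetical, not the paper's settings.

```python
import numpy as np

# Coarse block-based motion detection: split the frame into blocks and flag a
# block as "moving" when its mean absolute difference against the previous frame
# exceeds a threshold. Block size, threshold, and frames are hypothetical.
BLOCK = 16
THRESHOLD = 12.0

def moving_blocks(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    h, w = prev.shape
    hb, wb = h // BLOCK, w // BLOCK
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))[: hb * BLOCK, : wb * BLOCK]
    per_block = diff.reshape(hb, BLOCK, wb, BLOCK).mean(axis=(1, 3))
    return per_block > THRESHOLD  # boolean map: True where a block contains motion

prev = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:140, 150:200] = 255  # synthetic "moving object"
print(np.argwhere(moving_blocks(prev, curr)))
```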


Pre-processing of load data of agricultural tractors during major field operations

  • Ryu, Myong-Jin;Kabir, Md. Shaha Nur;Choo, Youn-Kug;Chung, Sun-Ok;Kim, Yong-Joo;Ha, Jong-Kyou;Lee, Kyeong-Hwan
    • Korean Journal of Agricultural Science / v.42 no.1 / pp.53-61 / 2015
  • Development of highly efficient and energy-saving tractors has been one of the key issues in agricultural machinery. For the design of such tractors, measurement and analysis of the load on major power transmission parts are the most important prerequisite tasks. The objective of this study was to perform pre-processing procedures prior to effective analysis of load data of agricultural tractors (30, 75, and 82 kW) during major field operations such as plow tillage, rotary tillage, baling, and bale wrapping, and to select suitable pre-processing methods for the analysis. The load measurement systems installed in the tractors consisted of strain-gauge, encoder, hydraulic pressure, and radar speed sensors to measure the torque and rotational speed of the transmission input shaft, PTO shaft, and driving axle shafts, the pressure of the hydraulic inlet line, and the travel speed, respectively. All sensor data were collected at a 200-Hz rate. Plow tillage, rotary tillage, baling, wrapping, and loader operations were selected as the major field operations. The farm operations and driving levels were set the same or differently for each load measurement experiment. Before load data analysis, pre-processing procedures such as outlier removal, low-pass filtering, and data division were performed. Data beyond the measuring range of the sensors and the operating range of the power transmission parts were removed. Considering engine and PTO rotational speeds, frequency components above cut-off frequencies of 90, 60, and 60 Hz were removed by low-pass filtering for plow tillage, rotary tillage, and baler operations, respectively. The measured load data were divided into five parts: driving, working, implement up, implement down, and turning. The results of the study provide useful information on the load characteristics of tractors during major field operations.
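
A minimal version of the outlier-removal and low-pass-filtering steps is sketched below for a synthetic 200 Hz torque signal; the sensor range, filter order, and the signal itself are hypothetical, while the 200 Hz rate and the 60 Hz rotary-tillage cut-off follow the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch of two of the pre-processing steps named above: range-based outlier
# removal and zero-phase low-pass filtering of a 200 Hz torque signal.
# Sensor range, filter order, and the synthetic signal are hypothetical.
FS = 200.0                    # sampling rate, Hz (from the abstract)
CUTOFF = 60.0                 # cut-off for rotary tillage, Hz (from the abstract)
SENSOR_RANGE = (0.0, 2000.0)  # hypothetical valid torque range, N*m

t = np.arange(0.0, 10.0, 1.0 / FS)
torque = 400 + 50 * np.sin(2 * np.pi * 3 * t) + np.random.normal(0, 20, t.size)

# 1) drop samples outside the sensor's measuring range
torque = torque[(torque >= SENSOR_RANGE[0]) & (torque <= SENSOR_RANGE[1])]

# 2) zero-phase Butterworth low-pass filter
b, a = butter(4, CUTOFF / (FS / 2), btype="low")
torque_filtered = filtfilt(b, a, torque)
print(torque_filtered[:5])
```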

A Systematic Method for Analyzing Business Cases in Product Line Engineering (프로덕트 라인 공학의 체계적 비즈니스 케이스 분석 기법)

  • Park Shin-Young;Kim Soo-Dong
    • The KIPS Transactions:PartD / v.13D no.4 s.107 / pp.565-572 / 2006
  • Product Line Engineering (PLE) is an effective reuse methodology in which common features among members are captured into core assets and applications are developed by reusing those core assets, reducing development cost while increasing productivity. To maximize the benefits of developing systems with PLE, business case analysis is essential. If the scope of the core assets is excessively broad, asset development cost will be high while reusability is lowered. On the other hand, if the scope is too narrow, applicability will be limited, supporting only a small number of members in the domain. In this paper, we propose a process for business case analysis in PLE and for economically determining the core asset scope, and we define guidelines for each activity of the process. Since variability frequently occurs in PLE, we treat the variability of features among members at a detailed level. By applying our framework for business case analysis, one can develop core assets whose scope provides the most economic value when applying PLE.
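
A toy version of the scope trade-off the abstract describes is sketched below: a wider scope costs more to build but yields savings across more products. The cost figures and the net-value formula are hypothetical and are not the paper's analysis technique.

```python
# Toy core-asset scope comparison (hypothetical numbers, not the paper's model):
# net value = reuse savings across applicable products minus asset development cost.
def net_value(asset_cost: float, saving_per_product: float, applicable_products: int) -> float:
    return applicable_products * saving_per_product - asset_cost

candidates = {
    "narrow scope": dict(asset_cost=120.0, saving_per_product=40.0, applicable_products=3),
    "medium scope": dict(asset_cost=200.0, saving_per_product=55.0, applicable_products=5),
    "broad scope":  dict(asset_cost=420.0, saving_per_product=60.0, applicable_products=6),
}

for name, c in candidates.items():
    print(f"{name}: net value {net_value(**c):+.0f} person-days")
```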

A Consistent Quality Bit Rate Control for the Line-Based Compression

  • Ham, Jung-Sik;Kim, Ho-Young;Lee, Seong-Won
    • IEIE Transactions on Smart Processing and Computing / v.5 no.5 / pp.310-318 / 2016
  • Emerging technologies such as the Internet of Things (IoT) and the Advanced Driver Assistance System (ADAS) often include image transmission functions with tough constraints, like low power and/or low delay, which require them to adopt line-based, low-memory compression methods instead of existing frame-based image compression standards. Bit rate control in conventional frame-based compression systems requires a lot of hardware resources because the scope of the handled data is the frame level. On the other hand, attempts to reduce this heavy hardware resource requirement by focusing on line-level processing yield uneven image quality across the frame. In this paper, we propose a bit rate control that maintains consistent image quality across the frame and improves the legibility of text regions. To find the line characteristics, the proposed bit rate control tests each line for ease of compression and the existence of text. Experiments on the proposed bit rate control show peak signal-to-noise ratios (PSNRs) similar to those of conventional bit rate controls, but with the use of significantly fewer hardware resources.
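
A toy per-line bit allocation in the spirit of the abstract is sketched below: each line's share of the frame budget grows with how hard it is to compress, with a bonus for lines flagged as text. The activity measure, the weighting, and the text flags are hypothetical, not the paper's method.

```python
import numpy as np

# Toy per-line bit allocation: weight each line by horizontal gradient activity
# (a rough proxy for compression difficulty) and boost lines flagged as text.
# Budget, bonus factor, frame, and text flags are hypothetical.
FRAME_BUDGET_BITS = 400_000
TEXT_BONUS = 1.5

def allocate_line_budgets(frame: np.ndarray, text_lines: np.ndarray) -> np.ndarray:
    activity = np.abs(np.diff(frame.astype(np.int16), axis=1)).mean(axis=1) + 1.0
    weights = activity * np.where(text_lines, TEXT_BONUS, 1.0)
    return FRAME_BUDGET_BITS * weights / weights.sum()  # bits granted to each line

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # placeholder frame
text_lines = np.zeros(480, dtype=bool)
text_lines[200:232] = True                                     # placeholder text band
budgets = allocate_line_budgets(frame, text_lines)
print(f"avg bits per text line: {budgets[text_lines].mean():.0f}, "
      f"per non-text line: {budgets[~text_lines].mean():.0f}")
```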