Subjective Imaging Effect Assessment for Intelligent Imaging Terminal Design: a Method for Engineering Site

  • Liu, Haoting (Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing) ;
  • Lv, Ming (Department of Medical Engineering, The Third Medical Center of PLA General Hospital) ;
  • Yu, Weiqun (Department of Medical Engineering, The Third Medical Center of PLA General Hospital) ;
  • Guo, Zhenhui (Jiuquan Satellite Launch Center) ;
  • Li, Xin (Jiuquan Satellite Launch Center)
  • Received : 2019.04.01
  • Accepted : 2019.11.14
  • Published : 2020.03.31

Abstract

A Subjective Imaging Effect Assessment (SIEA) method and its applications to intelligent imaging terminal design at the engineering site are presented. First, several visual assessment indices are used to characterize the imaging effect: the image brightness, the image brightness uniformity, the color image contrast, the image edge blur, the image color difference, the image saturation, the image noise, and the integrated imaging effect index. A linear weighted function is employed to carry out the SIEA computation, and the Analytic Hierarchy Process (AHP) technique is used to estimate its weights. Second, an SIEA software tool is developed; it can play images after the assessment index, the assessment reaction time, and other parameters are set. Third, two cases illustrate the application of the proposed method: the image enhancement system design for a surveillance camera and the imaging environment perception system design for an intelligent lighting terminal. A Prior Sequential Stimulus (PSS) experiment is proposed to improve the evaluation stability of the SIEA method. Extensive experiment results show that the proposed method can achieve a stable system design or parameter setting for the intelligent imaging terminal at the engineering site.

1. Introduction

With the rapid development of information technology, intelligent imaging terminals [1] are used almost everywhere in daily life. An intelligent imaging terminal is a kind of display system that can adaptively tune its output according to environmental changes or the habits of the system user. Because of different application environments, the output effects of intelligent imaging terminals vary widely. Since humans are the final audience of such terminals, it is necessary to ask them to give subjective assessments of the final imaging effect; that is the research target of the Subjective Imaging Effect Assessment (SIEA) for intelligent imaging terminal design. The SIEA can provide a benchmark for the output effect assessment [2] of a specific system; it is also relevant to the user's working state, the application purpose, and other environmental factors [3]. Thus one of its research targets is how to design robust assessment indices to characterize the attributes of the imaging effect, and another is how to organize the assessment experiment at a proper cost.

In a broad sense, the study of SIEA for intelligent imaging terminal design is a typical research problem of subjective image quality assessment [4]. Much effort has been devoted to this field. For example, the International Telecommunication Union (ITU) has formulated many international standards for subjective image quality assessment [5-7]. According to the application field, the SIEA can be used in medical imaging terminal research [8], mobile device design [9], computer science [10], and communication equipment development [11], etc. In terms of research targets, it can also be applied to the medical image [12], the facsimile image [13], the compressed image [14], the segmented image [15], the stereoscopic image [16], the omnidirectional image [17], and the remotely sensed image [18], etc. In recent years, new techniques such as eye movement analysis [19] and the artificial neural network [20] have begun to be used in the SIEA research field. However, their high application cost and complex assessment procedures limit the application of these methods.

Compared with other human-involved studies, the research and application of SIEA for intelligent imaging terminal design at the engineering site have their own characteristics. First, the design of SIEA is related to the equipment. For example, the SIEA for a medical device [21] should differ from its application for communication equipment [22]. Second, the design of SIEA should consider the typical imaging terminal application. The intelligent imaging terminal is often used as a surveillance system or an environment perception device, which indicates that the SIEA design should consider some characteristic assessment indices [23] for these applications. Third, the design of SIEA should also consider its feasibility at the engineering site. When implementing an SIEA task at an engineering site, in order to decrease cost, the engineering developer always wants the number of subjects to be small, the assessment speed of the human-involved experiment to be fast, and the stability of the assessment result [24] to be high. All these targets raise new requirements for the SIEA design.

In this paper, a human visual perception-based SIEA method and its applications are proposed. First, in contrast to other methods [25], several human visual perception-related indices are utilized to evaluate the subjective imaging effect. They include the image brightness, the image brightness uniformity, the color image contrast, the image edge blur, the image color difference, the image saturation, the image noise, and the integrated imaging effect index. A linear weighted function, whose weights are estimated with the Analytic Hierarchy Process (AHP) method, is used to implement the SIEA computation. Second, an SIEA software tool is developed. This software can play images one by one after the assessment index, the assessment reaction time, the maximum assessment degree, and other parameters are set. Third, two typical applications of intelligent imaging terminal design are considered to illustrate the usage of the proposed system and method: the image enhancement system design of a surveillance camera terminal [26] and the environment perception method design of an intelligent lighting terminal [27]. A Prior Sequential Stimulus (PSS) experiment for intelligent imaging terminal design is also proposed.

The main contributions of this paper are as follows. First, a human visual perception-based SIEA system is proposed. A set of human visual perception-based subjective assessment indices is utilized; they are easy for users to understand and convenient to use at the engineering site. An AHP-based SIEA computation method is also proposed. Second, an experimental technique named the PSS method is proposed to improve the assessment stability for intelligent imaging terminal design, and two typical applications of SIEA are illustrated. They are instructive for the future design of intelligent imaging terminals [28].

In the following sections, the implementation of the SIEA method is introduced first. Second, the problem formulations of the typical applications for intelligent imaging terminal design are presented. Finally, experiment results and discussions are given.

2. Proposed SIEA Method for Intelligent Imaging Terminal Design

2.1 Assessment Indices

Regarding subjective image quality assessment, the ITU has proposed some standard and universal evaluation indices and methods [5-7]; however, these methods are not easy to use for intelligent imaging terminal design at the engineering site, for two reasons. First, most of the proposed indices are not easily understood by non-professional users. Second, the proposed indices are neither complete nor systematic for intelligent imaging terminal design, because the ITU standard makers had to consider universality when formulating them. To overcome these problems, eight indices are proposed to implement the SIEA task in this paper. The explanations of the SIEA indices are given in Table 1. In Table 1, the better the assessment effect is, the larger the index score should be. For example, regarding the image brightness, if the practical brightness is too high or too low, i.e., the evaluation is negative, the score will be low; regarding the image noise, the score will be high if the noise is low. Finally, for the sake of simplicity, an evaluation index ED defined in (1) and (2) is calculated. The parameter ED is defined to make the following deduction easier; after all, a decision based on a single index is simpler than one based on eight indices.

\(ED=\sum_{i=1}^{8} w_{i} \times M_{i}\)       (1)

\(\sum_{i=1}^{8} w_{i}=1\)       (2)

where wi is the weight of the i-th SIEA index and Mi is its corresponding assessment result; the eight indices correspond to {IB, IBU, CIC, IEB, ICD, IS, IN, IIE} (see Table 1). Each Mi is a statistical assessment result over multiple subjects.
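A minimal sketch of the ED computation in (1) and (2) is given below: a weighted sum of the eight per-index mean scores. The weight and score values are hypothetical placeholders; in the paper the weights come from the AHP procedure and each score is an average over multiple subjects.

```python
# Sketch of Eq. (1)-(2): ED as a weighted sum of the eight SIEA index scores.
INDICES = ["IB", "IBU", "CIC", "IEB", "ICD", "IS", "IN", "IIE"]

def compute_ed(mean_scores, weights):
    """Weighted sum of per-index mean scores; the weights must sum to 1."""
    assert abs(sum(weights[k] for k in INDICES) - 1.0) < 1e-6
    return sum(weights[k] * mean_scores[k] for k in INDICES)

# Hypothetical example values on the 0-10 scale.
weights = {"IB": 0.05, "IBU": 0.10, "CIC": 0.20, "IEB": 0.10,
           "ICD": 0.12, "IS": 0.08, "IN": 0.15, "IIE": 0.20}
mean_scores = {"IB": 6.2, "IBU": 7.0, "CIC": 5.5, "IEB": 6.8,
               "ICD": 7.4, "IS": 6.0, "IN": 5.1, "IIE": 6.3}
print(compute_ed(mean_scores, weights))
```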

Table 1. The definitions and the explanations of SIEA indices


The AHP method is utilized to estimate the weights in equations (1) and (2). The AHP can rank the multiple factors of a complex system by calculating its single-level ranking and general-level ranking. Its processing steps include: problem definition, hierarchy establishment, judgment matrix building, single-level ranking and its consistency check, and general-level ranking and its consistency check. When using the AHP to compute the weights for equation (1), three hierarchies are built. Fig. 1 shows the basic structure of the proposed AHP model; the definitions of the symbols appearing in Fig. 1 can be found in Table 1. From Fig. 1 it can be seen that the goal hierarchy of the AHP is to implement the imaging effect assessment; the criteria hierarchy has four elements: the brightness-related assessment element, the contrast-related assessment element, the color-related assessment element, and the integrated assessment element; and the SIEA indices are allocated to the third hierarchy. The "1~9" scale definition method [29] is employed to assess the importance of the judgment matrix elements; Table 2 shows its definition. The judgment matrix weights are evaluated according to the opinions of experts and system users.
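The following sketch illustrates the classical AHP weight estimation, assuming the principal-eigenvector method with Saaty's consistency check; the 4×4 judgment matrix for the criteria-level elements is a hypothetical example, not the matrix actually used in this paper.

```python
import numpy as np

# Saaty's random consistency index values for matrix orders 1..8.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def ahp_weights(judgment):
    """Return (weights, consistency ratio) for a pairwise judgment matrix."""
    judgment = np.asarray(judgment, dtype=float)
    n = judgment.shape[0]
    eigvals, eigvecs = np.linalg.eig(judgment)
    k = np.argmax(eigvals.real)                     # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                    # normalise so weights sum to 1
    ci = (eigvals[k].real - n) / (n - 1)            # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0           # consistency ratio
    return w, cr

# Hypothetical "1~9" scale judgments for the four criteria-level elements
# (brightness-, contrast-, color-related, and integrated assessment elements).
A = [[1,   1/3, 1/2, 1/4],
     [3,   1,   2,   1/2],
     [2,   1/2, 1,   1/3],
     [4,   2,   3,   1  ]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 => acceptable
```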


Fig. 1. The structure of the AHP model for SIEA of intelligent imaging terminal design


Table 2. The "1~9" scale definition of the classical AHP model

2.2 Assessment Procedure

Like subjective image quality assessment, the SIEA for intelligent imaging terminal design uses the traditional ergonomics method [30] to implement the assessment experiment. To improve the experimental stability, a PSS experiment is used here. The PSS experiment, like a preliminary experiment, uses viewing data to help subjects learn the approximate change trend of the image dataset before the formal assessment experiment is carried out; however, it differs from the preliminary experiment, which is designed only to help the subject learn the experiment procedure. Here the letter "P" ("Prior") means the viewing-data-related experiment is implemented before the formal experiment; the first "S" ("Sequential") means the images of the viewing dataset are captured sequentially while the imaging terminal works; and the second "S" ("Stimulus") refers to the visual stimulation. The viewing dataset is a set of images that approximately covers the typical change trend of the entire image dataset. In this paper the viewing data are captured throughout the working period of the intelligent imaging terminal with uniformly-spaced sampling. For example, in the video surveillance application, the viewing data can be sampled evenly over the entire working period of the surveillance camera.
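A small sketch of the viewing-data selection is given below: a uniformly spaced sample is taken from the time-ordered image dataset so that the viewing set covers the whole working period. The file names and the sampling step are hypothetical.

```python
# Sketch of the PSS viewing-data selection by uniformly spaced sampling.
def select_viewing_data(ordered_images, n_view):
    """Pick n_view images evenly spaced over the time-ordered dataset."""
    step = max(1, len(ordered_images) // n_view)
    return ordered_images[::step][:n_view]

# Hypothetical surveillance sequence: one frame every 10 minutes, 6:30-18:30.
all_images = [f"frame_{i:03d}.png" for i in range(72)]
viewing = select_viewing_data(all_images, n_view=12)   # roughly one frame per hour
print(viewing)
```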


Fig. 2. The implementation flow chart of the proposed SIEA method

Fig. 2 shows the implementation flow chart of the proposed SIEA. First, the experimental data of the typical intelligent imaging terminal application are collected. The experimental data should be typical enough to represent the approximate change trend of the imaging terminal output. When selecting the "typical" data, the opinions of experts or users can be considered: they should tell the system designer what kind of image data are important and can represent the main imaging effect for the specific application. Second, proper subjects are selected; in general, the subjects are the system users themselves. Third, a preliminary experiment is implemented to teach the subjects the experiment procedures, details, and points of attention. After this, the subjects should be familiar with all the details of the formal experiment. Fourth, the formal experiment is carried out. The experimenter sets the experiment parameters, starts the experiment, and controls the experiment process in this stage. During this procedure, the subject takes part in both the PSS experiment and the rest of the formal experiment. The PSS experiment is utilized to improve the stability of the subjective assessment experiment, and the remaining formal steps are employed to collect the corresponding data. Finally, the experiment results are recorded and analyzed.

2.3 Assessment Software

An SIEA software tool is developed in this paper; Fig. 3 shows its interface. From Fig. 3 it can be seen that this software provides six typical functions. First, it can select image data from a specified storage path via the dataset selection button. Second, it can set the reaction time of the subject; there is no limit on the maximum reaction time. Third, it can select the SIEA indices: the eight indices in Table 1 can be chosen one by one, and in each experiment only one evaluation index is used. Fourth, it can play image data in the image display region, whose background color is black. Fifth, the subject can score the displayed image with a mouse; currently, the largest score degree is 10. Sixth, the software supports "Start", "Stop", and "Exit" functions. When the program runs, it records the experiment details, including the experiment type, the experiment times, and the experiment scores, in an Excel file; the corresponding statistical results can then be analyzed. If the subject fails to score some images, the software plays these missed images again at the end of the experiment, and the procedure does not terminate until all images have been scored.
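The sketch below illustrates only the scoring-loop logic described above (random play order, replay of missed images, and result logging). It is a simplified console stand-in under assumed names; the actual software is a C/Matlab GUI with timing control and an Excel output, none of which is reproduced here.

```python
import csv
import random

def run_session(images, max_score=10, out_path="siea_results.csv"):
    """Collect one score per image, replaying skipped images until all are scored."""
    order = random.sample(images, len(images))      # random play order
    scores = {}
    pending = list(order)
    while pending:                                  # replay missed images
        missed = []
        for img in pending:
            raw = input(f"Score for {img} (1-{max_score}, blank to skip): ")
            if raw.strip().isdigit() and 1 <= int(raw) <= max_score:
                scores[img] = int(raw)
            else:
                missed.append(img)
        pending = missed
    with open(out_path, "w", newline="") as f:      # log results for later statistics
        writer = csv.writer(f)
        writer.writerow(["image", "score"])
        writer.writerows(sorted(scores.items()))
    return scores
```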


Fig. 3. The interface illustration of the SIEA software

3. Formation of Typical Applications

3.1 Assessment Requirement Analysis of Intelligent Imaging Terminal Design

With the development of artificial intelligence techniques, the output of the intelligent imaging terminal should be friendly to the customer: it should adapt to environmental changes or the user's habits. In general, the research target of intelligent imaging terminal design is to find solutions that make the output clear and vivid; or, if the imaging output effect cannot be improved, the images with high visual effect should be discerned and recommended to the user. When considering the SIEA for intelligent imaging terminal design, the data size, the number of experiments, the experiment period, and the experiment procedure should be arranged carefully; that is, the assessment complexity should be controlled, because most users do not have the patience to wait for a tedious assessment experiment to be completed. This constraint is commonly encountered in system design and application at the engineering site. In this paper, to illustrate the usage of the proposed SIEA system and method, two applications are considered: the image enhancement method design for the surveillance camera system and the imaging environment perception method design for the intelligent lighting system.

3.2 Image Enhancement Method Design for the Surveillance Camera System

The aim of image enhancement [31] is to improve the interesting or useful information in a degraded image. It has at least two advantages for practical application: on one hand, it can directly improve the visual display effect for humans, i.e., people get a clear image to watch; on the other hand, it can also improve the performance of subsequent processing algorithms [32], i.e., the computational robustness of the corresponding algorithm can be improved. The outdoor surveillance camera always suffers from complex atmospheric conditions such as mist, fog, haze, or glare. Currently, with the development of hardware techniques, many surveillance cameras can encapsulate the image enhancement algorithm in their hardware or software system. However, the question is: what is a clear image, or how can a clear image be defined for these algorithms? Since humans are the final users of the surveillance camera and also the designers of the image processing algorithm, it is reasonable to use the SIEA to find clear images for the image enhancement system design.

Fig. 4 shows the schematic diagram and the image samples of the image enhancement system of a surveillance camera. This system captures visible light images and performs adaptive image enhancement on its background processing machine. In Fig. 4, (a) is the schematic diagram of the surveillance application. Here "long distance" means the observation distance between the camera and the target (a forest area in Fig. 4) is larger than 1000 meters; in this situation, atmospheric conditions such as mist or fog distinctly affect the camera output. Images (b) and (c) are samples captured by this system on a summer day in north China; obviously image (b) has a relatively clear output while image (c) is seriously affected by fog. Thus one of the most important design targets of this intelligent surveillance terminal is to develop an adaptive image enhancement method that can resist the influence of the complex atmosphere and produce a comparably smooth imaging output. To solve these problems, the SIEA method should be used to provide an evaluation benchmark for the image enhancement algorithm design: many actual surveillance image datasets are collected and ergonomics experiments are used for the imaging quality evaluation.


Fig. 4. The schematic diagram and the image samples of the image enhancement method design for intelligent surveillance camera

3.3 Imaging Environment Perception Method Design for the Intelligent Lighting System

The intelligent lighting technique uses the Light Emitting Diode (LED) lamp, internet of things sensors, wired or wireless communication, and intelligent data processing methods [33] to realize adaptive control of the lighting device, so that energy can be saved or the lighting effect can be tuned. Traditional intelligent lighting uses the photosensitive resistor or the infrared sensor to perceive environmental changes and control the output of the LED lamp. Recently, the imaging sensor has been proposed to implement the environment perception computation [27] so that elaborate lighting control can be realized. Compared with traditional environment luminance measurement techniques, the imaging sensor-based method can analyze the environmental lighting more precisely and can approach an excellent lighting effect. However, the question is: what is the excellent lighting effect? Since humans are the final users of the lighting system, the SIEA can be used again to answer this question.

An imaging environment perception-based intelligent lighting system is developed. It can be used for lighting effect tuning in a complex light environment. For example, if the user employs a wearable visible light camera to capture the front image, the intelligent lighting system can use the analysis results of the front images to control the output of the LED lamp according to the environmental light changes; then the clarity of the captured image can be kept at a preferable level [34]. Fig. 5 shows the schematic diagram of the proposed application and image samples captured under different lighting effects. In Fig. 5 (a), a vertical view is shown and the red points mark the typical observation positions of the camera; α1=α2≈30°, AB≈25 cm, and AC≈30 cm. In Fig. 5 (b) and (c), the captured image samples are given. From (b) and (c) it can be seen that the imaging differences of these data are comparatively small; thus the final design target of the intelligent lighting system is to evaluate the lighting effect and recommend the best lighting control method to the system user. Obviously, the SIEA method can be utilized here again.


Fig. 5. The schematic diagram and the image samples of the intelligent lighting system design application

4. Experiments & Discussions

A series of experiments are designed and implemented to illustrate the applications of the proposed SIEA method to the image enhancement method design and the imaging environment perception method design. The proposed SIEA software is developed in C and Matlab on a PC (4.0 GB RAM, 1.70 GHz Intel(R) Core(TM) i3-4005U CPU).

4.1 The Experiment Organization Method

The aim of the SIEA for intelligent imaging terminal design is to classify the original image dataset into different categories according to their subjective imaging effect degrees. Here an image with high imaging quality has abundant detail, distinct edges, fresh colour, proper contrast, and low imaging noise. To control the experiment complexity, sixteen subjects (eight males and eight females) participate in the experiments. The research aim is to test the feasibility of the proposed system and method; if the stability of the proposed system and method is high, fewer subjects can be asked to participate at the engineering site in the future. The ages of the selected subjects range from 22 to 36, and their uncorrected eyesight is better than 0.8. None of them have any ophthalmopathies or other serious diseases. The subjects are non-professionals in subjective image quality evaluation, but they have basic experience and knowledge of image surveillance and intelligent lighting applications.

The basic experiment procedure includes 4 steps. First, the experiment preparation is done: the experimenter collects and organizes the corresponding experiment data and applies for experiment authorization from the ethics committee of the General Hospital of Chinese People's Armed Police Forces. Second, the experimenter explains the experiment procedures to the subjects, and then a practice experiment is carried out twice. Third, the formal experiment is implemented. The experimenter selects the experiment data; sets the SIEA index, the reaction time, and the maximum evaluation degree; and plays the images for the subjects. The subjects score the images according to the application requirements and their personal experience. As stated above, the formal experiment includes both the PSS experiment and the formal assessment experiment. Fourth, the experiment results are recorded in the system and the final SIEA score is computed by the method presented in section 2.1.

4.2 The Results of Image Enhancement-related Experiments

Regarding the image enhancement-related experiment, images with different definitions are selected from the surveillance image dataset by the subjects. When preparing these image data, the images captured in typical weathers are accumulated first. In this paper the typical weathers are only the sunny day and the cloudy day; they are recorded in summer at a forest area of north China. The visible light camera saves one image per ten minutes and its working time is from 6:30 to 18:30; thus the total size of the image dataset is 6×12×2=144. When organizing the experiment data, the data are classified into two types: the viewing data and the experimental data. The viewing data are the images captured one per hour; obviously they can represent the rough change trend of the surveillance image dataset. For example, they are captured at 6:30, 7:30, …, 18:30. The viewing data are displayed to the subjects sequentially according to their capture time, and their photography conditions are also announced to the subjects. Since two typical weathers are considered, the viewing data include 2×12=24 images. In contrast, the image quantity of the formal experiment is 144; four sub-datasets are built and each of them has 36 images.

Fig. 6 shows the experiment process sketch map. In Fig. 6, a complete experiment includes 4 sub-experiments, each of which uses different experimental data. Each sub-experiment lasts 504 seconds and is followed by a 180-second break; the total experiment time is therefore 504×4+180×3=2556 seconds. Each sub-experiment has three steps. The first step only plays the viewing data for the subject; that is the PSS experiment proposed in this paper. The subjects just watch and need to do nothing; they learn the rough change trend of the image dataset during this step. Each image appears for 4 seconds on the screen, so this step lasts 4×24=96 seconds. In the other two steps, the subject scores the same experimental dataset twice; in each step the play order of the experimental images is random. Each experimental dataset has 36 images and the reaction time is 4 seconds, so each step lasts 36×4=144 seconds. The total time of each sub-experiment is 96+60×2+144×2=504 seconds. In this experiment the maximum SIEA score degree is 10.


Fig. 6. The process sketch map of the image enhancement-related experiment

Some experiment results are shown in Fig. 7. In Fig. 7, (a) and (b) are the evaluation means of each SIEA index for Fig. 4 (b) and (c), respectively. According to the designed experiment flow, for each image 16 subjects assess each SIEA index twice; thus the mean of each SIEA index is the average of 16×2=32 samples. Fig. 7 (c) shows the computed ED of the 72 images captured on a sunny day. When computing ED, Tables 3 and 4 show the judgment matrix results of the criteria hierarchy and the alternative hierarchy of the AHP, and Table 5 gives the weights of ED deduced by the AHP method. In Tables 3 and 4, the judgment matrices come from the opinions of surveillance system experts and users. From Table 5 it can be seen that the AHP importance rank of the SIEA indices is: the integrated imaging effect index, the colour image contrast, the image noise, the image colour difference, the image brightness uniformity, the image edge blur, the image saturation, and the image brightness.


Fig. 7. The experiment result samples of the intelligent image enhancement system design

Table 3. The judgmental matrix results of the AHP criteria hierarchy of the image enhancement-related experiment


Table 4. The judgmental matrix results of the AHP alternative hierarchy of the image enhancement-related experiment


Table 5. The AHP weights of SIEA indices of the image enhancement-related experiment


After the calculation of ED, it can be used as a region classification criterion for the image dataset partition of the intelligent image enhancement system design. In this paper the ED region distributions are estimated by the C-means clustering technique [35]. First, all the computed ED results are mapped into the range between 0.0 and 10.0. Second, five cluster centers are computed by the C-means method; the number of cluster centers is proposed by the experts and the system users, in whose opinion the images captured by the surveillance camera differ strongly enough that the subject can identify five different imaging degrees easily. Third, the middle points of any two adjacent cluster centers are calculated, which yields four middle points. Finally, the computed middle points are regarded as the segmentation thresholds of ED. Table 6 shows the resulting ED region classification; it can also be regarded as a classification model for SIEA, and equation (3) shows its mathematical form. In Table 6, the number of classification categories is 5, and the corresponding ED region distributions and their subjective descriptions are also given. Compared with the multiple SIEA indices, ED can reflect the subjective imaging effect completely; more importantly, it is easy to use, because classification using only one index is easier than classification using eight indices together. Finally, the classification results of the ED regions are image datasets with different SIEA degrees; they can provide a benchmark for the following computation or system design.

\(D_{S I E A 1}=\left\{\begin{array}{cc} 5 & 8.1324 \leq E D \leq 10.0 \\ 4 & 6.4349 \leq E D<8.1324 \\ 3 & 3.9816 \leq E D<6.4349 \\ 2 & 2.5776 \leq E D<3.9816 \\ 1 & 0.0 \leq E D<2.5776 \end{array}\right.\)       (3)

where DSIEA1 denotes the degree of SIEA for the image enhancement system design; the larger DSIEA1 is, the better the subjective imaging effect.
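The sketch below illustrates this partition step, using plain 1-D k-means as a stand-in for the C-means clustering of [35]: the ED scores are clustered, the centres are sorted, and the midpoints of adjacent centres become the segmentation thresholds used in equation (3). The ED values are synthetic examples.

```python
import numpy as np

def cluster_thresholds(ed_values, k=5, iters=100, seed=0):
    """Cluster 1-D ED scores and return the midpoints between adjacent centres."""
    ed = np.asarray(ed_values, dtype=float)
    rng = np.random.default_rng(seed)
    centres = rng.choice(ed, size=k, replace=False)
    for _ in range(iters):                                 # Lloyd iterations
        labels = np.argmin(np.abs(ed[:, None] - centres[None, :]), axis=1)
        centres = np.array([ed[labels == j].mean() if np.any(labels == j)
                            else centres[j] for j in range(k)])
    centres.sort()
    return (centres[:-1] + centres[1:]) / 2.0              # k-1 midpoints

def classify(ed, thresholds):
    """Map an ED score to a degree 1..len(thresholds)+1, Eq. (3) style."""
    return int(np.searchsorted(thresholds, ed, side="right")) + 1

# Synthetic ED scores drawn around five hypothetical cluster centres.
ed_scores = np.clip(np.concatenate([np.random.default_rng(1).normal(m, 0.6, 30)
                                    for m in (1.5, 3.2, 5.2, 7.2, 9.0)]), 0, 10)
th = cluster_thresholds(ed_scores)
print(np.round(th, 3), classify(6.0, th))
```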

Table 6. A kind of ED region distribution of the image enhancement-related experiment


To verify the validity and the robustness of the proposed method, a test experiment is designed. Since the typical images have been classified into 5 categories (see Table 6), we first pick 20 images randomly from the 5 datasets above; the SIEA results and the ED region distributions of these images are therefore known. Second, we ask the subjects to implement the SIEA experiment again. Third, we compare the original evaluation results with the test assessment results. The PSS experiment is always included in this test experiment. Fig. 8 shows the corresponding comparison results. In Fig. 8, the black lines mark the four distribution segmentation thresholds of ED presented in Table 6. From Fig. 8 it can be seen that, except for columns 14 and 15, all other columns fall in the same distribution region as before. These results show the stability of the proposed method to some extent. After this experiment is repeated many times, the correctness ratio of the ED location is found to be larger than 90%, which also indicates the effectiveness of the proposed method.
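A minimal sketch of this verification step follows: the re-assessed ED scores are mapped to degrees with the thresholds of equation (3) and compared with the originally assigned degrees to obtain a correctness ratio. The 20 original degrees and re-assessed ED scores below are hypothetical.

```python
import numpy as np

THRESHOLDS = [2.5776, 3.9816, 6.4349, 8.1324]      # segmentation thresholds of Eq. (3)

def degree(ed):
    """Degree 1..5 of an ED score with respect to the fixed thresholds."""
    return int(np.searchsorted(THRESHOLDS, ed, side="right")) + 1

def correctness_ratio(original_degrees, retest_ed):
    """Fraction of re-tested images that land in their original ED region."""
    hits = sum(degree(ed) == d for d, ed in zip(original_degrees, retest_ed))
    return hits / len(original_degrees)

# Hypothetical test set: known degrees and re-assessed ED scores for 20 images.
orig = [1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 2, 3, 4, 5]
retest = [1.8, 2.1, 3.1, 3.5, 4.4, 5.0, 5.8, 6.5, 6.7, 7.0,
          7.5, 7.9, 6.6, 7.9, 9.1, 9.6, 3.0, 5.2, 8.0, 8.9]
print(correctness_ratio(orig, retest))             # 0.9 (18 of 20 in the original region)
```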


Fig. 8. The test experiment results of the intelligent image enhancement system design

4.3 The Results of Imaging Environment Perception-related Experiment

In this experiment, images with different lighting effects are assessed and classified. When preparing the experiment data, visible light images captured under different lighting conditions are recorded. The camera is put in the typical positions (see Fig. 5 (a)) and the LED luminance is tuned from weak to strong linearly. Twenty images are sampled in each position; for example, if the total LED control time at one observation position is 60 seconds, the dataset can be obtained by sampling one image every 3 seconds. In total, 20×6=120 images are obtained. When organizing the data, according to the PSS experiment method, they are also classified into the viewing data and the formal experimental data. The viewing data are 5 of the recorded images in each position; in the example above, they can be obtained by sampling one image every 12 seconds. The size of the viewing data is 6×5=30. The formal experimental data include all 120 images captured at the 6 typical observation positions; they are split into 4 datasets, each with 30 images.

Fig. 9 shows the experiment process sketch map. Like the experiment flow of the image enhancement system design, this experiment also has 4 sub-experiments, but each sub-experiment has 4 implementation steps. This is because the lighting images are harder to assess than the surveillance images; that is, the detailed differences among lighting images are small. Each sub-experiment includes one step of the PSS experiment and three steps of the formal experiment. The PSS experiment only plays images for the subjects, while the formal experimental steps ask the subjects to assess the imaging effect of the same dataset three times. The photography conditions of the PSS experiment are told to the subjects, while the image play orders of the formal experiment are random. From Fig. 9 it can be seen that both the PSS experiment step and each formal experiment step play 30 images, and each image appears on screen for 5 seconds. Thus each sub-experiment lasts 150×4+60×3=780 seconds, and the complete experiment time is about 780×4+180×3=3660 seconds. The maximum SIEA score degree is 10.


Fig. 9. The experiment process sketch map of the intelligent lighting system design

Some experiment results are shown in Fig. 10. In Fig. 10, (a) and (b) are the evaluation means of each SIEA index for Fig. 5 (b) and (c). Fig. 10 (c) shows the ED computation results of 60 images. Tables 7 and 8 show the judgment matrix results of the criteria hierarchy and the alternative hierarchy of the AHP, and Table 9 gives the AHP weights of ED. Similarly, the judgment matrices in this experiment come from the opinions of experts and system users. From Table 9, for this application, the importance rank of the SIEA indices is: the integrated evaluation index, the colour image contrast, the image colour difference, the image brightness uniformity, the image edge blur, the image noise, the image saturation, and the image brightness. Table 10 shows the region classification of the ED distribution. This region distribution also comes from the computation results of the C-means method; the corresponding computational method can be found in section 4.2. It can also be regarded as a classification model for SIEA of the intelligent lighting system, and equation (4) gives its computational form. Because the experts and the system users think it is hard to distinguish the differences among the lighting images, the number of ED regions is only three. Finally, the classification results of the ED regions are image datasets with different SIEA degrees; they can provide a benchmark for the following intelligent lighting system design.


Fig. 10. The experiment result samples of the intelligent lighting system design application

Table 7. The judgmental matrix results of the AHP criteria hierarchy of the imaging environment perception-related experiment


Table 8. The judgmental matrix results of the AHP alternative hierarchy of the imaging environment perception-related experiment


Table 9. The AHP weights of SIEA indices of the imaging environment perception-related experiment


Table 10. A kind of ED region distribution of the imaging environment perception-related experiment


\(D_{S I E A 2}=\left\{\begin{array}{lc} 3 & 8.4934 \leq E D \leq 10.0 \\ 2 & 3.2131 \leq E D<8.4934 \\ 1 & 0.0 \leq E D<3.2131 \end{array}\right.\)        (4)

where DSIEA2 is the degree of SIEA for the intelligent lighting system design; the larger DSIEA2 is, the better the subjective imaging effect.

To verify the effectiveness of the proposed method, a test experiment is also carried out. From Table 10, the lighting effects are classified into 3 degrees; thus we randomly select 20 images from these datasets and ask the subjects to carry out the SIEA experiment again. After that we investigate the assessment results and compare the original evaluation results with the test assessment results; the original SIEA results and the ED results of the selected data are known in advance. When carrying out this experiment, both the viewing data (i.e., the PSS experiment) and the newly selected data are used, and the experiment flow is the same as the sub-experiment presented in Fig. 9. Fig. 11 shows the corresponding comparison results. The black lines mark the segmentation thresholds of ED in Table 10. From Fig. 11 it can be seen that, except for columns 2 and 17, all other columns fall into the same distribution regions. These results indicate the effectiveness of the proposed method to some extent. After this experiment is repeated many times, the correctness ratio of the ED location is found to be larger than 85%.


Fig. 11. The test experiment result samples of the intelligent lighting system design

4.4 Discussions

The study of subjective imaging effect assessment for intelligent imaging terminal design is a classical ergonomics research problem [36,37]. Since humans are the final receivers of media data or the designers of the information processing algorithm, it is reasonable to let them carry out the effect assessment of media data [38,39]. In recent years, the SIEA technique has been widely used in information system design and application; however, its main problem is its divergent and unstable assessment results, especially with limited subjects and experimental times. In many situations, the number of subjects cannot be large because of time or financial limitations. To overcome that problem to some extent, on one hand the experiment data size for each subject should be enlarged; on the other hand, the experiment procedure should be designed carefully. Regarding the former solution, the experiment cost will still be high, which is not acceptable in many situations; as for the latter solution, the physiological and psychological states of the subject should also be monitored carefully, because a long and complex experiment procedure always decreases the patience of the subject.

Traditionally, when evaluating the usability and effect of a visual display terminal, experimenters always design a special operation task, such as a reading task [40], a game-playing task [41], or other human-computer interaction tasks [42]; then task-related indices are designed to assess the experiment effect. For example, in [43] the authors utilized a reading task to evaluate the comprehension and workload differences between video display terminal-based and paper-based reading. The drawbacks of these methods are obvious: the given task often cannot reflect the varied practical applications well, and the assessment indices cannot represent the actual situation completely. In this paper, eight SIEA indices are utilized to assess the imaging effect of the intelligent imaging terminal. Their design motivation comes from the integrated consideration of the human visual function and subjective image quality evaluation [44]. From Table 1, it can be seen that the definitions of these evaluation indices can reflect the cognitive ability of people comprehensively; they are neither abstract nor difficult to use in the practical experiment. Moreover, the ITU-proposed methods are not considered here because their targeted assessment effect for intelligent imaging terminal design is not strong.

The display effect evaluation technique has been developed for many years, yet the corresponding research methods have not changed a lot. This evaluation technique always sets up a human-involved experiment first and lets subjects take part in it, and then defines assessment indices or standards to evaluate the application effect. For example, in [45] the authors utilized a software tool to study the legibility of an Arabic digital typeface; the pixel shape and colour and the matrix pixel density factors were considered. In [46] the authors researched visual fatigue for simulated flexible electronic paper; different surface treatments and illumination conditions were considered. In recent years, some state-of-the-art methods utilize eye movement data or electroencephalogram data [47] to assess the working state of the subject. Obviously, compared with our method, these methods add complexity and cost to the practical application. Table 11 compares the traditional method, the state-of-the-art method, and our method for the SIEA application; the corresponding factors, such as the experimental equipment, the subjects, the time cost, and the accuracy, are compared. From Table 11 it can be seen that our proposed method has some advantages for practical application at the engineering site.

Table 11. The comparisons among the traditional method, the state-of-the-art method, and our method for the SIEA application


In order to improve the experiment stability, the design and application of the PSS experiment are necessary. After some initial experiment tests, we find that the SIEA experiment results become divergent and meaningless if the PSS experiment is omitted; this situation is serious especially when the number of subjects is small. Table 12 shows a comparison between the evaluation method using the PSS experiment and the method without it. In the experiment of Table 12, 6 subjects are selected to take part in both the image enhancement system design experiment and the intelligent lighting system design experiment. Here the accuracy means the ratio between the correct classification times of the subjective evaluation degree (see Figs. 8 and 11) and the total experiment times. From Table 12 it can be seen that our proposed method has a better processing effect. Like the Double-Stimulus Impairment Scale (DSIS) method [48], the PSS experiment also provides a kind of stimulus to the subjects and gives them a rough cognition of the experiment data. Thus the SIEA method proposed in this paper can be utilized with limited subjects; this result also indicates that the method can be employed at the engineering site. In addition, to increase the evaluation reliability, the PSS experiment can sometimes be run several times at the beginning of each evaluation experiment. In that case, the play order should be random, and it is not necessary to tell the subjects that the same viewing data are used; the subjects then have to work hard to learn something from each PSS experiment.

Table 12. The comparisons between the method using PSS experiment and the method without it


Regarding the data captured by the image enhancement system, their SIEA is comparatively easy because these images are apparently affected by the atmosphere; in contrast, the image differences of the data captured by the intelligent lighting system are not as apparent as those collected from the former experiment, so the evaluation reaction time should be longer and the number of experiment repetitions should be larger. Fig. 12 shows some challenging experiment data for SIEA. In Fig. 12, (a), (b), and (c) are data from the image enhancement system design, while (d), (e), and (f) are data from the intelligent lighting system design. Comparatively speaking, from Fig. 12, the changes of (a), (b), and (c) can be distinguished. In contrast, images (d), (e), and (f) are captured under different lighting effects: the intensity of the lighting output becomes stronger from left to right; however, because of the low imaging contrast, and because the display also radiates light itself, the visual differences among these images are difficult for subjects to discern. Further experiment results show that if we ask the subjects to evaluate the subjective imaging effect of the latter image dataset without the PSS experiment, the corresponding evaluation results are very disappointing. From Fig. 8 and Fig. 11, it can also be found that the evaluation results of the intelligent lighting system design experiment are not as stable as those of the image enhancement system design experiment; thus more experiment repetitions and data [49] may be used in that experiment in the future.


Fig. 12. The challenging samples of experiment data for SIEA

The experimental differences between the two typical applications are also compared here. Table 13 shows the corresponding results of the image enhancement-related experiment and the imaging environment perception-related experiment. From Table 13 it can be seen that the sub-experiment repetition times, the reaction time setting, and the weights of the SIEA indices of these experiments are all different. Obviously, the latter experiment consumes more experiment time and computation resources even though it has less experiment data. These phenomena come from the imaging effect difference between the original data of the two experiments. In the intelligent lighting system design application, the outputs of the display are currently only documents; as a result, its visual effects related to colour and detail are not prominent, and this factor seriously influences the SIEA results. Our experiment results also indicate that if the display outputs were full of colour and detail [50], the evaluation results of the subjects would be more stable.

Table 13. The experiment method comparisons between the image enhancement system design application and the intelligent lighting system design application


5. Conclusion

A human visual perception-based SIEA method for intelligent imaging terminal design is proposed. Eight SIEA indices are used to assess the imaging effect: the image brightness, the image brightness uniformity, the colour image contrast, the image edge blur, the image colour difference, the image noise, the image saturation, and the integrated evaluation index. A linear weighted function is employed to implement the SIEA computation, and the AHP method is utilized to estimate its weights. An SIEA software tool is developed, and the PSS experiment method is proposed to improve the stability of the SIEA experiment. Two typical applications are implemented using the proposed method and software: the image enhancement method design for the surveillance camera system and the imaging environment perception method design for the intelligent lighting system. Many experiment details are provided, and the similarities and differences of these typical applications are compared.

References

  1. S. Park, and S.-U. Kang, "Visual quality optimization for privacy protection bar-based secure image display technique," KSII Transactions on Internet and Information Systems, vol. 11, no. 7, pp. 3664-3677, July, 2017. https://doi.org/10.3837/tiis.2017.07.020
  2. L. Guo, Y. Luo, X. He, G. Hu, and Y. Dong, "A method for service evaluation based on fuzzy theory for cloud computing," KSII Transactions on Internet and Information Systems, vol. 11, no. 4, pp. 1820-1840, April, 2017. https://doi.org/10.3837/tiis.2017.04.001
  3. Z. Qin, J. Xie, F.-C. Lin, Y.-P. Huang, H.-P. D. Shieh, "Evaluation of a transparent display's pixel structure regarding subjective quality of diffracted see-through images," IEEE Photonics Journal, vol. 9, no. 4, pp. 7000414-1 - 7000414-15, June, 2017.
  4. J.-Y. Lee, and Y.-J. Kim, "Optimal image quality assessment based on distortion classification and color perception," KSII Transactions on Internet and Information Systems, vol. 10, no. 1, pp. 257-271, January, 2016. https://doi.org/10.3837/tiis.2016.01.015
  5. ITU-R BT.500-13, Methodology for the subjective assessment of the quality of television pictures, January, 2012.
  6. ITU-T P.910, Subjective video quality assessment methods for multimedia applications, April, 2008.
  7. ITU-R BT.1788, Methodology for the subjective assessment of video quality in multimedia applications, February, 2002.
  8. C. Zhang, Y. Yu, Z. Zhang, Q. Wang, L. Zheng, Y. Feng, Z. Zhou, G. Zhang, and K. Li, "Imaging quality evaluation of low tube voltage coronary CT angiography using low concentration contrast medium," PlosOne, vol.10, no. 3, pp.e0120539-1 - e0120539-12, March, 2015. https://doi.org/10.1371/journal.pone.0120539
  9. D. Pal, and V. Vanijja, "Model for mobile online video viewed on Samsung Galaxy Note 5," KSII Transactions on Internet and Information Systems, vol. 11, no. 11, pp. 5392-5418, November, 2017. https://doi.org/10.3837/tiis.2017.11.012
  10. P. Tiefenbacher, V. Bogischef, D. Merget, and G. Rigoll, "Subjective and objective evaluation of image inpainting quality," in Proc. of IEEE International Conference on Image Processing, pp. 447-451, September 27-30, 2015.
  11. M. Klima, P. Pata, K. Fliegel, P. Hanzlik, "Subjective image quality evaluation in security imaging systems," in Proc. of International Carnahan Conference on Security Technology, pp. 19-22, October 11-14, 2005.
  12. A. Conradie, and C. P. Herbst, "Evaluating the effect of reduced entrance surface dose on neonatal chest imaging using subjective image quality evaluation," Physica Medica, vol. 32, no. 10, pp. 1368-1374, October, 2016. https://doi.org/10.1016/j.ejmp.2016.07.005
  13. T. Betchaku, N. Sato, and H. Murakami, "Subjective evaluation methods of facsimile image quality," in Proc. of IEEE International Conference on Communications, pp. 966-970, May 23-26, 1993.
  14. R. Mukherjee, K. Debattista, T. Bashford-Rogers, P. Vangorp, R. Mantiuk, M. Bessa, B. Waterfield, and A. Chalmers, "Objective and subjective evaluation of high dynamic range video compression," Signal Processing: Image Communication, vol. 47, pp. 426-437, September, 2016. https://doi.org/10.1016/j.image.2016.08.001
  15. R. Shi, K. N. Ngan, S. Li, R. Paramesran, and H. Li, "Visual quality evaluation of image object segmentation: subjective assessment and objective measure," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5033-5045, December, 2015. https://doi.org/10.1109/TIP.2015.2473099
  16. A. K. Moorthy, C. C. Su, A. Mittal, and, A. C. Bovik, "Subjective evaluation of stereoscopic image quality," Signal Processing: Image Communication, vol. 28, no, 8, pp. 870-883, September, 2013. https://doi.org/10.1016/j.image.2012.08.004
  17. E. Upenik, M. Rerabek, and T. Ebrahimi, "A testbed for subjective evaluation of omnidirectional visual content," in Proc. of Picture Coding Symposium, pp. 1-5, December 4-7, 2016.
  18. L. N. Faria, L. M. G. Fonseca, and M. H. M. Costa, "Performance evaluation of data compression systems applied to satellite imagery," Journal of Electrical and Computer Engineering, vol. 2012, pp. 471857-1 - 471847-15, January, 2012.
  19. T. Vuori, M. Olkknen, M. Polonen, A. Siren, and J. Hakkinen, "Can eye movement be quantitatively applied to image quality studies?" in Proc. of Nordic Conference on Human-computer Interaction, pp. 335-338, October 23-27, 2004.
  20. H. E. Khattabi, A. Tamtaoui, and D. Aboutajdine, "Measure a subjective video quality via a neural network," in Proc. of International Conference on Digital Information and Communication Technology and Its Applications, pp. 121-130, June 21-23, 2011.
  21. C. Zhu, Y. Zhao, L. Yu, M. Tanimoto, 3D-TV System with Depth-Image-Based Rendering, Architectures, Techniques and Challenges, Springer Press, 2013.
  22. A. Stoica, C. Vertan, C. Fernandez-Maloigne, "Objective and subjective color image quality evaluation for JPEG 2000-compressed images," in Proc. of International Symposium on Signals, Circuits and Systems, pp. 137-140, July 10-11, 2003.
  23. A. Javaheri, C. Brites, F. Pereira, and J. Ascenso, "Subjective and objective quality evaluation of 3D point cloud denoising algorithms," in Proc. of IEEE International Conference on Multimedia and Expo Workshops, pp. 1-6, July 10-14, 2017.
  24. G.-Y. Gim, H. Kim, J.-A. Lee, W.-Y. Kim, "Subjective image-quality estimation based on psychophysical experimentation," in Proc. of Pacific-rim Symposium on Image and Video Technology, pp. 346-356, December 17-19, 2007.
  25. F. Ribeiro, D. Florencio, and V. Nascimento, "Crowdsourcing subjective image quality evaluation," in Proc. of IEEE International Conference on Image Processing, pp. 3097-3100, September 11-14, 2011.
  26. H. Liu, H. Lu, and Y. Zhang, "Image enhancement for out-door long-range surveillance using IQ-learning multiscale Retinex," IET Image Processing, vol. 11, no. 9, pp.786-795, September, 2017. https://doi.org/10.1049/iet-ipr.2016.0972
  27. H. Liu, Q. Zhou, J. Yang, T. Jiang, Z. Liu, and J. Li, "Intelligent luminance control of lighting systems based on imaging sensor feedback," Sensors, vol. 17, no. 2, pp. 321-1 - 321-24, February, 2017. https://doi.org/10.3390/s17020321
  28. U. Reter, J. Korhonen, and J. You, "Comparing apples and oranges: assessment of the relative video quality in the presence of different types of distortions," EURASIP Journal on Image and Video Processing, vol. 2011, no. 8, pp. 1-10, December, 2011.
  29. D. M. Kim, J. O. Kim, "Design of emergency demand response program using analytic hierarchy process," IEEE Transactions on Smart Grid, vol. 3, no. 2, pp. 635-644, June, 2012. https://doi.org/10.1109/TSG.2012.2188653
  30. B. Siddharth, and A. K. Abid, "Ergonomics investigation for orientation of the handles of wood routers," International Journal of Occupational Safety & Ergonomics, vol. 2018, pp. 592-604, February, 2018.
  31. I. Szabo, J. Sun, G. Feng, et al. "Automated defect recognition as a critical element of a three dimensional X-ray computed tomography imaging-based smart non-destructive testing technique in additive manufacturing of near net-shape parts," Applied Sciences, vol. 7, no. 11, pp. 7111156-1 - 7111156-14, November, 2017.
  32. H. Liu, C. Wang, H. Lu, and W. Yang, "Outdoor camera calibration method for a GPS & PTZ camera based surveillance system," in Proc. of IEEE International Conference on Industrial Technology, pp.263-267, March 14-17, 2010.
  33. S. A. M. Offermans, H. A. V. Essen, and J. H. Eggen, "User interaction with everyday lighting systems," Personal & Ubiquitous Computing, vol. 18, no. 8, pp. 2035-2055, December, 2014. https://doi.org/10.1007/s00779-014-0759-2
  34. S. Park, D. Choi, J. Yi, S. Lee, J. E. Lee, B. Choi, S. Lee, and G. Kyung, "Effects of display curvature, display zone, and task duration on legibility and visual fatigue during visual search task," Applied Ergonomics, vol. 60, pp. 183-193, April, 2017. https://doi.org/10.1016/j.apergo.2016.11.012
  35. X. Wang, J. Huang, Y. Chu, A. Shi, and L. Xu, "Change detection in bitemporal remote sensing images by using feature fusion and fuzzy C-means," KSII Transactions on Internet and Information Systems, vol. 12, no. 4, pp. 1714-1729, April, 2018. https://doi.org/10.3837/tiis.2018.04.017
  36. C. D. McKinnon, J. P. Callaghan, and C. R. Dickerson, "Evaluation of the influence of mobile data terminal location on physical exposures during simulated police patrol activities," Applied Ergonomics, vol. 43, pp. 859-867, January, 2012. https://doi.org/10.1016/j.apergo.2011.12.009
  37. M. Zahabi, and D. Kaber, "Effect of police mobile computer terminal interface design on officer driving distraction," Applied Ergonomics, vol. 67. pp. 26-38, September, 2018. https://doi.org/10.1016/j.apergo.2017.09.006
  38. S. Lee, "Evaluation of mobile application in user's perspective: case of P2P lending Apps in FinTech industry," KSII Transactions on Internet and Information Systems, vol. 11, no. 2, pp. 1105-1117, February, 2017. https://doi.org/10.3837/tiis.2017.02.027
  39. J. Tian, J. Zhang, P. Zhang, and X. Ma, "Dynamic trust model based on extended subjective logic," KSII Transactions on Internet and Information Systems, vol. 12, no. 8, pp. 3926-3945, August, 2018. https://doi.org/10.3837/tiis.2018.08.021
  40. I. Humar, M. Gradisar, T. Turk and J. Erjavec, "The impact of color combinations on the legibility of text presented on LCDs," Applied Ergonomics, vol. 45, pp. 1510-1517, May, 2014. https://doi.org/10.1016/j.apergo.2014.04.013
  41. K. Sui, and W.-H. Lee, "Image processing analysis and research based on game animation design," Journal of Visual Communication and Image Representation, vol. 60, pp. 94-100, April, 2019. https://doi.org/10.1016/j.jvcir.2018.12.011
  42. M.-T. Chen, and C.-C. Lin, "Comparison of TFT-LCD and CRT on visual recognition and subjective preference," International Journal of Industrial Ergonomics, vol. 34, pp. 167-174, February, 2004. https://doi.org/10.1016/j.ergon.2004.02.003
  43. D. K. Mayes, V. K. Sims, and J. M. Koonce, "Comprehension and workload differences for VDT and paper-based reading," International Journal of Industrial Ergonomics, vol. 28, pp. 367-378, April, 2001. https://doi.org/10.1016/S0169-8141(01)00043-9
  44. Y. Fang, J. Yan, L. Li, J. Wu, and W. Lin, "No reference quality assessment for screen content images with both local and global feature representation," IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 1600-1610, April, 2018. https://doi.org/10.1109/TIP.2017.2781307
  45. I. M. Al-Harkan, and M. Z. Ramadan, "Effects of pixel shape and color, and matrix pixel density of Arabic digital type face on characters' legibility," International Journal of Industrial Ergonomics, vol. 35. pp. 652-664, April, 2005. https://doi.org/10.1016/j.ergon.2005.01.009
  46. Y.-T. Lin, P.-H. Lin, S.-L. Hwang, S.-C. Jeng, and C.-C. Liao, "Investigation of legibility and visual fatigue for simulated flexible electronic paper under various surface treatments and ambient illumination conditions," Applied Ergonomics, vol. 40, pp. 922-928, January, 2009. https://doi.org/10.1016/j.apergo.2009.01.003
  47. C. Conte, A. Ranavolo, M. Serrao, A. Silvetti, G. Orengo, S. Mari, F. Forzano, S. Iavicoli, and F. Draicchio, "Kinematic and electromyographic differences between mouse and touchpad use on laptop computers," International Journal of Industrial Ergonomics, vol. 44, pp. 413-420, March, 2014. https://doi.org/10.1016/j.ergon.2014.01.001
  48. R. C. Streijl, S. Winkler, and D. S. Hands, "Mean opinion score (MOS) revisited: methods and applications, limitations and alternatives," Multimedia Systems, vol. 22, no. 2, pp. 213-227, March, 2016. https://doi.org/10.1007/s00530-014-0446-1
  49. H. Talebi, and P. Milanfar, "NIMA: neural image assessment," IEEE Transactions on Image Processing, vol. 27, no. 8, pp. 3998-4011, August, 2018. https://doi.org/10.1109/tip.2018.2831899
  50. T.-J. Liu, "Study of visual quality assessment on pattern images: subjective evaluation and visual saliency effects," IEEE Access, vol. 6, pp. 61432-61444, October, 2018. https://doi.org/10.1109/access.2018.2875759