ANNEX III (Article 4)
EVALUATION OF ASSESSORS AND THE RELIABILITY OF RESULTS IN SENSORY ANALYSES
The following procedures are applicable if scoring methods are used (IDF Standard 99C:1997).
A. DETERMINATION OF THE ‘REPEATABILITY INDEX’
At least ten samples will be analysed as blind duplicates by an assessor within a period of 12 months. This will usually happen in several sessions. The results for individual product characteristics are evaluated using the following formula:

w_I = \sqrt{\frac{\sum_{i=1}^{n} (x_{i1} - x_{i2})^2}{2n}}

where:

x_{i1}: score for the first evaluation of sample x_i

x_{i2}: score for the second evaluation of sample x_i

The samples to be evaluated should reflect a broad quality range. w_I should not exceed 1,5 (5-point scales).
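The repeatability calculation above can be sketched in code as follows. This is a minimal illustration, not part of the regulation: the function name and the sample scores are invented for demonstration, and the scores are assumed to lie on the 5-point scale.

```python
import math

def repeatability_index(first_scores, second_scores):
    """Compute w_I from paired blind-duplicate scores.

    first_scores[i] and second_scores[i] are the assessor's two
    evaluations of the same sample i (blind duplicates).
    """
    if len(first_scores) != len(second_scores):
        raise ValueError("each sample needs exactly two evaluations")
    n = len(first_scores)
    if n < 10:
        raise ValueError("at least ten samples are required")
    # Sum of squared differences between the duplicate evaluations
    sq_diffs = sum((a - b) ** 2 for a, b in zip(first_scores, second_scores))
    return math.sqrt(sq_diffs / (2 * n))

# Illustrative scores for ten samples (hypothetical, not from the regulation):
x1 = [5, 4, 3, 4, 5, 2, 3, 4, 5, 4]
x2 = [5, 4, 4, 4, 4, 2, 3, 5, 5, 4]
w_i = repeatability_index(x1, x2)
print(f"w_I = {w_i:.2f}")  # should not exceed 1.5 on a 5-point scale
```

An assessor who scores every duplicate identically obtains w_I = 0; larger spreads between the paired evaluations increase the index.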
B. DETERMINING THE ‘DEVIATION INDEX’
This index should be used to check whether an assessor uses the same scale for quality evaluation as an experienced group of assessors. The scores obtained by the assessor are compared with the average of the scores obtained by the assessor group.
The following formula is used for the evaluation of results:

D_I = \sqrt{\frac{\sum_{i=1}^{n} \left[ (x_{i1} - \bar{x}_{i1})^2 + (x_{i2} - \bar{x}_{i2})^2 \right]}{2n}}

where:

x_{i1}, x_{i2}: see section (A)

\bar{x}_{i1}, \bar{x}_{i2}: average score of the assessor group for the first and second evaluation respectively of sample x_i

n: number of samples (at least 10 per 12 months).

The samples to be evaluated should reflect a broad quality range. D_I should not exceed 1,5 (5-point scales).
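The deviation-index calculation can be sketched in the same way. Again this is an illustrative helper, not part of the regulation; the function name and the score pairs below are invented for demonstration.

```python
import math

def deviation_index(assessor, group_means):
    """Compute D_I comparing one assessor with the assessor group.

    assessor[i] = (x_i1, x_i2): the assessor's first and second scores
    for sample i.
    group_means[i] = (xbar_i1, xbar_i2): the group's average scores for
    the same first and second evaluations of sample i.
    """
    if len(assessor) != len(group_means):
        raise ValueError("assessor and group scores must cover the same samples")
    n = len(assessor)
    if n < 10:
        raise ValueError("at least 10 samples per 12 months are required")
    # Squared deviations from the group mean, for both evaluations
    total = sum((a1 - m1) ** 2 + (a2 - m2) ** 2
                for (a1, a2), (m1, m2) in zip(assessor, group_means))
    return math.sqrt(total / (2 * n))

# Illustrative scores for ten samples (hypothetical, not from the regulation):
assessor_scores = [(4, 5), (3, 3), (5, 4), (4, 4), (2, 3),
                   (5, 5), (3, 4), (4, 3), (5, 5), (4, 4)]
group_mean_scores = [(4.2, 4.6), (3.1, 3.3), (4.8, 4.1), (4.0, 4.2), (2.4, 2.9),
                     (4.9, 4.7), (3.2, 3.8), (3.9, 3.4), (4.6, 4.8), (4.1, 4.0)]
d_i = deviation_index(assessor_scores, group_mean_scores)
print(f"D_I = {d_i:.2f}")  # should not exceed 1.5 on a 5-point scale
```

An assessor who matches the group average exactly obtains D_I = 0; a constant offset of one point on every evaluation yields D_I = 1.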
Member States must notify any difficulties encountered when applying this procedure.
Where individual assessors are found to exceed the 1,5 limit for the deviation or repeatability index, the official authority's expert(s) must perform one or more random ‘re-performance’ checks on samples graded by those assessors over the following weeks, or carry out one or more ‘accompanied’ checks with them. Close monitoring is necessary to decide whether to retain their services. Findings should be documented and retained as proof of follow-up action.
C. COMPARISON OF THE RESULTS OBTAINED IN DIFFERENT REGIONS OF A MEMBER STATE AND IN DIFFERENT MEMBER STATES
Where applicable, a test must be organised at least once per year to compare the results obtained by assessors from different regions. If significant differences are observed, the necessary steps should be taken to identify the reasons and arrive at comparable results.
Member States may organise tests to compare the results obtained by their own assessors and by assessors from neighbouring Member States. Significant differences should lead to an in-depth investigation with the aim of arriving at comparable results.
Member States should notify the Commission of the results of these comparisons.