SALAD QUALITY MANAGEMENT
Microbiological quality of open ready-to-eat salad vegetables: effectiveness of food hygiene training of management
During September and October 2001, a microbiological study of open, ready-to-eat, prepared salad vegetables from catering or retail premises was undertaken to determine their microbiological quality. The study focused on those salad vegetables that were unwrapped and handled either by staff or customers in the premises where the sample was taken. Examination of salad vegetables from food service areas and customer self-service bars revealed that most (97%; 2,862 of 2,950) were of satisfactory or acceptable microbiological quality, while 3% (87) were of unsatisfactory microbiological quality because of Escherichia coli levels in the range of 10² to 10⁵ colony-forming units per gram. One sample (<1%) was of unacceptable microbiological quality because of the presence of Listeria monocytogenes at 840 colony-forming units per gram. The pathogens E. coli O157, Campylobacter spp., and salmonellas were not detected in any of the samples examined. The display areas of most food service and preparation areas (95%) and self-service salad bars (98%) visited were judged to be visibly clean by the sampling officer. Most self-service bars (87%) were regularly supervised or inspected by staff during opening hours, and designated serving utensils were used in most salad bars (92%) but in only a minority of food service areas (35%). A hazard analysis system was in place in most (80%) premises, and in 61% it was documented. Most (90%) managers had received food hygiene training. A direct relationship was shown between increased confidence in the management of the food business and both the presence of food safety procedures and the training of management in food hygiene.
Lot Acceptance Testing for Ready-to-Eat Salads
April 1, 2015
Quality assurance (QA) managers routinely have product and raw materials tested for undesirable bacteria. As long as the results are negative, it is easy to feel that all is right with the world and that the operation’s quality and safety systems are functioning correctly. In the ready-to-eat (RTE) salad marketplace, some customers are demanding microbiological testing as part of lot acceptance. These demands for testing are probably driven by past outbreaks and recalls. This testing comes in three basic flavors: raw material testing in the field, raw material testing at receiving and finished product testing. This article examines the deliverables of a typical acceptance program of each flavor, the attributes of a risk-based acceptance testing program and ultimately how to incorporate acceptance testing into a risk-based safety program.
Discussions of acceptance testing can easily be complicated by focusing on the details and exactness of calculations. To partially avoid this pitfall, a simple case study has been included to assist the interested reader through the math as applied to a single lot. This article will call on this example to illustrate important points of discussion. Additionally, as noted in the discussion, other choices will be made without exploring all the alternatives in the interests of brevity. These choices do not change the conclusions of this article.
Analysis of Testing
An analytical testing protocol is fundamental to any acceptance testing program, including those for examining the microbiological safety of RTE salad. In reality, no microbiological test procedure is perfect. There are always some false positives and false negatives in presence-versus-absence testing typically used in acceptance testing. To simplify this discussion, we will assume that both these error rates are negligible. The result of any microbiological test, be it presence or absence or an enumeration, is always associated with a volume or mass of sample. This yields a detection limit for the procedure that is extremely important in an acceptance testing program. The detection of the microorganisms is generally based on antibodies, PCR markers or growth on or in some type of selective media. A few exceptions are in the marketplace, including a phage-based procedure. However, the type of detection is not the focus of this discussion. For this article, we will assume that a hypothetical analytical procedure performs flawlessly with up to a 300-g sample of RTE salad or salad ingredients. A 300-g sample is large for a laboratory to handle on a routine basis. This large sample size provides the maximum practical sensitivity. Furthermore, we will assume that this hypothetical method will detect as little as a single colony-forming unit (CFU) of any pathogen of interest in the tested sample. The detection limit of this ideal method is therefore 1 CFU per 300 g of sample. Multiple samples are required in aggregate to detect levels below this detection limit.
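Under the stated assumptions (randomly distributed cells and a flawless assay), the chance that a single sample catches contamination at a given concentration follows directly from the Poisson distribution. A minimal sketch (the function name is illustrative, not from the article):

```python
import math

def p_detect(conc_per_g: float, sample_g: float = 300.0) -> float:
    """Probability that one sample of `sample_g` grams contains at least
    one CFU, assuming cells are randomly (Poisson) distributed at
    `conc_per_g` CFU per gram."""
    expected_cfu = conc_per_g * sample_g   # Poisson mean for this sample
    return 1.0 - math.exp(-expected_cfu)

# Even at exactly the detection limit (1 CFU per 300 g on average), a
# single 300-g sample still misses the contamination about 37% of the time:
print(round(p_detect(1 / 300), 3))   # → 0.632
```

This is why multiple samples in aggregate are needed to work below the nominal detection limit: each additional sample multiplies the chance of a miss by another factor of exp(-expected CFU).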
With this idealized microbiological method in hand, we can consider field testing acceptance programs. The performance of a field testing acceptance program will depend on the number and size of the samples taken. Although somewhat counterintuitive, field size, distribution of samples and distribution of contamination are not factors involved in the performance calculations. The size of the field is of minor significance because the total mass of the samples is insignificant when compared with the amount of product in the field. This simplification is the infinite lot approximation, which is generally acceptable until the total amount of sample reaches between 5 and 10 percent of the lot. This is analogous to assuming that cards are returned to the deck when calculating the odds for various hands at poker. In the case of field testing, it is an easily justifiable convenience to ignore field size.
The specific distribution of the samples taken and the specific distribution of the contamination in a field will impact the validity of a specific acceptance or rejection decision, but the laws of large numbers and probability will win out, negating these impacts in aggregate. One can easily imagine specific combinations of samples and distributions of contamination in which the acceptance program will mistakenly accept or reject a field. With foreknowledge, one could easily take corrective action and ensure the proper disposition of the field. In the real world, we lack this knowledge and can therefore do no better than a random sample. This applies even when the contamination is clustered. Deviations from randomness will impact individual decisions but will not affect the probabilities that ultimately dictate the performance of an acceptance program. The average performance of the acceptance program is driven by probability, which we can calculate on the basis of the number of samples and their size. This is why the 100-mL samples in the fanciful case study below found nothing, but the more extensive one bottle per case found many affected bottles. The bottler’s testing program lacked the sensitivity to observe what was determined to be an unacceptable consumer risk. Negative results do not necessarily demonstrate safety.
As an example of a specific acceptance testing program for accepting or rejecting fields, we will assume that ten 300-g samples are collected from every field and that these samples are randomly collected with plenty of grabs or specimens, as there is no better approach without additional knowledge. Unfortunately, these numerous grabs only make the samples more representative of the lot and do not increase the sensitivity of the inspection. This is an expensive program and exceeds what is normally found in the marketplace. One can calculate the operating characteristic (OC) curve for this inspection with a zero tolerance for pathogens, as illustrated in Figure 1 by the curve for n = 10. From this graph, we see that low concentrations of organisms, less than 0.0002 CFUs per serving, are essentially never detected. Fortunately, as will be discussed later, there is little concern below 0.001 CFUs per serving. High concentrations, over 0.2 CFUs per serving, are essentially always detected. At this level, as will be discussed below, about 1 in 5,000 consumers will get sick. For comparing OC curves, a typical point of comparison is 95 percent detection, here 0.15 CFUs per serving, where 95 percent of lots at this level will be rejected. Conversely, 5 percent of lots at this level of contamination will be accepted. This curve can be moved to the left to detect lower levels of contamination by exponentially increasing the number of samples collected, as seen by the curve for n = 100 in Figure 1. The horizontal axis is log scale, muting the impact of modest increases in sample number. We will consider whether this program is sensitive enough to meet an acceptable level of risk when we consider a risk-based acceptance program later in this article.
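The OC curves themselves are not reproduced here, but under the infinite-lot Poisson assumption they can be recomputed from the parameters in the text (300-g samples, 150-g servings, zero tolerance). A sketch, whose 95 percent detection point for n = 10 matches the 0.15 CFUs per serving quoted above:

```python
import math

SERVING_G = 150.0   # one serving of RTE salad
SAMPLE_G = 300.0    # idealized sample size (covers two servings)

def p_reject(cfu_per_serving: float, n_samples: int) -> float:
    """Probability that a zero-tolerance plan with n_samples 300-g samples
    rejects the lot, under the infinite-lot Poisson assumption."""
    servings_tested = n_samples * SAMPLE_G / SERVING_G
    return 1.0 - math.exp(-cfu_per_serving * servings_tested)

def conc_at_rejection(p: float, n_samples: int) -> float:
    """Concentration (CFU/serving) rejected with probability p."""
    servings_tested = n_samples * SAMPLE_G / SERVING_G
    return -math.log(1.0 - p) / servings_tested

print(round(conc_at_rejection(0.95, 10), 3))   # → 0.15 (n = 10 curve)
print(round(p_reject(0.0002, 10), 4))          # → 0.004, essentially never detected
print(round(p_reject(0.2, 10), 4))             # → 0.9817, essentially always detected
```

Note that the detection probability depends only on the total mass tested, which is why grabs and specimens improve representativeness but not sensitivity.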
A Simple Case Study
A lucky bottler has found the fountain of youth and is bottling the water in 24-count cases of 1-L bottles. The test run of 100 cases was so well received that the bottler packed another 100,000 cases. Unfortunately, by the time the original 100 cases were consumed, 10 individuals had contracted a bacterial aging disease that added decades to their apparent age while everyone else looked years younger. The bottler, concerned about his business, immediately commissioned 100 water tests (100 mL each), looking for problems. (Assume the test was 100 percent accurate.) No pathogenic microorganisms were detected in these 100 tests. Still feeling a little uneasy, the bottler decides to do some additional testing to assure his customers of the safety of his product. The bottler elects to test one entire bottle (1 L) from each case for all 100,000 cases. He is both relieved and concerned when 417 of the 100,000 test bottles are positive for the pathogen. So he retains you as an expert to assess his situation. How would you answer the following questions?
1. What would have been the risk to consumers if the bottler had sold the 100,000 cases without testing? What percentage of customers would have become ill?
2. Why were none of the 100 tests positive for the pathogen?
3. What would be the risk to consumers if the bottler sells the 99,583 cases of 23 bottles where the test was negative?
4. What would be the risk to consumers if the bottler sells the 417 cases of 23 bottles where the test was positive?
5. Has the acceptance testing reduced consumer risk?
6. What should the bottler do prior to packing another run?
1. Based on 10 illnesses due to 100 cases of 24 bottles assuming the pathogen distribution remains unchanged, the risk to consumers is 10 illnesses per 2,400 bottles or about 0.4167% of consumers would have been affected.
2. The contamination rate is 10 infective doses/2,400 L or 0.004167 pathogens per liter, assuming a single organism is infective. The 100 samples in aggregate are only 10 L. A pathogen would only be detected slightly over 4% of the time in these 100 tests. To have 95 percent confidence in detecting this level of contamination, one would need to test over 7,100 water samples of 100 mL each.
3. The consumer risk for the 99,583 cases would be slightly higher than for the untested lot because about 4% of the good bottles have been removed. The risk to consumers would be 0.435% (24/23 × 0.4167%).
4. The consumer risk for the 417 cases will be substantially lower than for the untested lot of 100,000 or for the 99,583 cases because the testing has removed pathogens. This computation must be done as a conditional probability. Assuming that pathogens are randomly distributed in bottles with a mean incidence of 0.004167 pathogens per bottle, 90.48% of cases will be free of pathogens, leaving 9.52% with one or more pathogens. Given the one positive, all 417 cases must fall in this 9.52%. We can look at the fraction of cases that had 2, 3 or 4 pathogens to calculate the number of pathogens remaining after removing the positive bottle and calculate the remaining consumer risk of 0.221%.
5. The simple answer is no. Even after the testing, no product should be sold, given the severity and number of the illnesses that can be expected. Given the lower risk, the idea of selling the cases where a positive bottle was detected may be tempting, but this runs counter to the intent of acceptance testing, and it is only a small amount of product in any case. Cases are unfortunately an artificial lot and must be considered in large aggregate. The reduction in total expected illnesses reflects only a reduction in the amount of available product.
6. The obvious answer is, something different. A process to reduce the expected population to less than the tolerance level, maybe one in a billion, is the clear choice. If the process is a little weak, a raw material testing program to ensure that the incoming load does not exceed the process capabilities of the treatment would be desirable. The process could be chlorination, ozone, radiation or UV treatment, thermal treatment or any other known ways to kill pathogens. Some form of process validation would be appropriate.
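Under the case study's assumptions (Poisson-distributed pathogens and a perfect test), the arithmetic behind answers 1 through 3 can be reproduced in a few lines; the variable names are illustrative:

```python
import math

BOTTLES = 100 * 24                  # test run: 100 cases of 24 one-litre bottles
ILLNESSES = 10

# Q1: risk to consumers without any testing
risk_untested = ILLNESSES / BOTTLES
print(f"{risk_untested:.4%}")       # → 0.4167%

# Q2: 100 tests of 100 mL each = 10 L in aggregate
rate_per_l = ILLNESSES / BOTTLES    # 0.004167 infective doses per litre
p_detect_100 = 1 - math.exp(-rate_per_l * 10)
print(f"{p_detect_100:.1%}")        # → 4.1%, so all-negative results are unsurprising

# number of 100-mL tests needed for 95% confidence of at least one positive
n_95 = math.log(20) / (rate_per_l * 0.1)
print(math.ceil(n_95))              # → 7190 (over 7,100, as stated)

# Q3: negative cases sell 23 bottles, so per-bottle risk rises by 24/23
print(f"{risk_untested * 24 / 23:.3%}")   # → 0.435%
```

The conditional calculation in answer 4 follows the same Poisson machinery applied per case (mean 24 × 0.004167 = 0.1 pathogens per case, so exp(-0.1) ≈ 90.48% of cases are clean).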
Testing at Receiving
The second flavor of acceptance testing is testing at receiving. Under this type of program, the processor defines a lot as a collection of totes or pallets at receiving. This is an arbitrary designation that merits closer scrutiny of its actual validity, but such scrutiny is beyond the scope of this article. As an example, we will assume that product is received in 2,000-pound lots that are sampled for one 300-g sample that is randomly selected as multiple specimens from the entire 2,000 pounds. Again, these specimens do not affect the sensitivity but only potentially make the sample more representative of the specific 2,000 pounds. We can again generate an OC for this acceptance testing program with a zero tolerance, assuming an infinite lot as illustrated in Figure 1 by the curve for n = 1. Note that a 95 percent detection level is 1.5 CFUs per serving or 10× higher than the field program we just considered. This difference is driven entirely by the number of samples used to make the decision. This acceptance program is less sensitive than the field testing program because only one 300-g sample is used to make each acceptance or rejection decision in spite of the smaller designated lot size.
Finished Product Testing
The final flavor of acceptance testing is based on finished product testing. There are many strategies for collecting specimens and compositing them into samples. None of these choices will impact the sensitivity of the program. They will impact the accuracy of individual determination like the grabs and specimens in the two other flavors we have examined. These programs typically sample less than 300 g of material for a decision. Thus, the assumption of an OC of n = 1 from Figure 1 will overestimate the sensitivity of a typical finished product acceptance program, making the typical finished product program the least sensitive of the three flavors. Having examined all three flavors of programs, we can examine the characteristics of a risk-based program and make some comparisons.
The most important requirement for a risk-based lot acceptance testing program is a target tolerance. Unfortunately, discussion of a tolerance for pathogens in a product is as awkward as discussing insect parts in peanut butter. Bad stuff is not supposed to be in food, but it is. It is important to remember that such a discussion will have no impact on the fact that pathogens are present on products in the marketplace despite all the testing being done. Inspection programs leak. Processes are not perfect. The goal of a lot acceptance program for RTE salads must be to ensure that these rare pathogens are at vanishingly low concentrations that have no importance. Zero is not achievable.
A number of metrics can be used to design or select a target tolerance to ensure that any remaining pathogens have no importance. For this article, we will use consumer risk of illness. The regulatory zero tolerance for pathogens is not useful for this purpose because no inspection can guarantee zero pathogens. In the case study, a consumer risk of illness of 0.42% was considered unacceptable for the antiaging water. One can postulate that a consumer is willing to accept up to a 1 in 1 million chance of getting ill from eating a serving, 150 g, of RTE salad. There is a tacit assumption that the risk of illness is the limiting safety constraint. This 1 in 1 million rate is an arbitrary choice that feels good. There are arguments to move the value up or down that will be ignored here. Without getting too tied up in the math, one can accept a simple exponential dose-response model [one cell can initiate infection (no threshold); organisms are randomly distributed in the serving; and host-pathogen interaction is a constant] where 1 CFU in a serving has a 0.001 probability of causing an illness. Researchers often advocate more complex models for a dose response, but these models would only add unwarranted complexity to this discussion and would not materially change the conclusions. With these constraints, one can calculate a tolerance of 0.001 CFUs per serving. Figure 2 illustrates the log-to-log relationship between risk and dose expressed as CFUs per serving. Outbreak risk has been previously used as a metric for defining importance and yields similar numbers.
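Assuming the simple exponential dose-response model just described (one-hit, no threshold, constant host-pathogen interaction), the tolerance follows directly. A minimal sketch:

```python
import math

R = 0.001  # assumed per-CFU probability of illness (simple exponential model)

def risk_per_serving(cfu_per_serving: float) -> float:
    """Marginal illness risk for a serving whose CFU count is Poisson
    with mean cfu_per_serving, under a one-hit exponential dose-response."""
    return 1.0 - math.exp(-R * cfu_per_serving)

# tolerance delivering a 1-in-1,000,000 risk per serving
tolerance = -math.log(1 - 1e-6) / R
print(round(tolerance, 6))          # → 0.001 CFU per serving

# the 1-in-5,000 figure quoted for 0.2 CFUs per serving
print(1 / risk_per_serving(0.2))    # about 5,000 servings per illness
```

At these low doses the model is effectively linear (risk ≈ R × dose), which is the log-to-log relationship shown in Figure 2.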
This 0.001 CFU per serving tolerance is the maximum level that a consumer is postulated to willingly accept. Most of the RTE salad must be less than this tolerance or there would be no product to accept with the risk-based acceptance testing process that is being designed. In fact, depending on the specific product, the efficacy of the grower’s Good Agricultural Practices (GAPs) program and ability to control cross-contamination in the processing plants, the background rate should be about 1 percent or less of this tolerance level. All acceptance testing programs leak. This is partly why zero risk can never be achieved. For an acceptance testing program, this leakage rate is usually set at 95 percent rejection of lots at the tolerance. Conversely, this means that 5 percent of the lots at the tolerance level will be accepted. Obviously, lots with a greater level of pathogens would be rejected more completely and those with less will be rejected less completely, as illustrated by the OC in Figure 1. This 95 percent rejection rate will be used to generate the operating curve for the acceptance process using the idealized analytical procedure described above.
Taking the 0.001 CFUs per serving tolerance, the 95 percent rejection rate and the perfect analytical procedure and applying the Poisson approximation of the binomial equation for an infinite lot, one can readily calculate that about 1,500 samples at 300 g each (almost 1,000 pounds) are required for the acceptance sampling program with no positives. The OC for this inspection protocol is illustrated in Figure 3 along with the three previous OCs. It should be noted that this new OC is substantially to the left of the previous three as would be expected with more than 10× more samples. There will of course be arguments that 1,500 samples, or about 1,000 pounds of testing, is unrealistic. However, this number is driven by the selected tolerance.
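The 1,500-sample figure can be checked with the same Poisson machinery: for a zero-tolerance plan the acceptance probability at concentration c CFUs per serving is exp(-2nc), since each 300-g sample covers two 150-g servings, so requiring 95 percent rejection at c = 0.001 gives n = ln(20)/0.002. A sketch:

```python
import math

TOLERANCE = 0.001          # CFU per 150-g serving
SERVINGS_PER_SAMPLE = 2    # each 300-g sample covers two servings

# zero-tolerance plan: reject on any positive; require 95% rejection
# probability for lots sitting exactly at the tolerance
n = math.log(20) / (TOLERANCE * SERVINGS_PER_SAMPLE)
print(math.ceil(n))                    # → 1498 samples ("about 1,500")

pounds = math.ceil(n) * 300 / 453.6    # convert total sample mass to pounds
print(round(pounds))                   # → 991 (almost 1,000 pounds)
```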
In the simple case study, the bottler elected to do 100 tests. He found nothing, which is not surprising. Here, too, one can do fewer samples, but fewer samples will not achieve the desired tolerance on a lot-release basis. The impact of too few samples is readily illustrated using colored beads from an urn as a model, as shown in Figure 4. This graph shows that when too few samples are drawn, one gets many cases where no colored beads are drawn and would therefore believe that all the beads are white. When a colored bead is drawn, one believes that there are more colored beads present in the lot. As the number of beads drawn increases for each test, it becomes increasingly possible to estimate the actual percentage of colored beads in the population with a test. Even with few beads per test, the average of many trials will actually correctly determine the percentage of colored beads, because the rules of large numbers and probability will always hold. If one applies these observations to the RTE business, it explains the occasional positives that cannot generally be confirmed and the mostly negative results. The numbers of samples in the three flavors of acceptance programs discussed above are inadequate for acceptance testing against this 0.001 CFU per serving tolerance.
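The bead-drawing behavior of Figure 4 is easy to reproduce with a small Monte Carlo sketch; the 0.4 percent colored-bead fraction below is an illustrative choice, not a value from the article:

```python
import random

random.seed(1)

def trial(p: float, draws: int) -> float:
    """Fraction of colored beads seen in one test of `draws` beads,
    drawn from an effectively infinite urn with colored fraction p."""
    return sum(random.random() < p for _ in range(draws)) / draws

P = 0.004            # true colored-bead (contamination) fraction, assumed
small, large = 10, 5000

# a single small test usually sees nothing; a single large test estimates P well
print(trial(P, small))               # most often 0.0
print(round(trial(P, large), 4))     # close to 0.004

# but the *average* of many small tests still converges on P
avg = sum(trial(P, small) for _ in range(10000)) / 10000
print(round(avg, 4))                 # ≈ 0.004
```

Individual small tests therefore swing between "all clear" and an alarming apparent rate, exactly the pattern of occasional unconfirmable positives seen in the RTE business.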
Assuming that one is willing to test the 1,000 or so pounds necessary, one must consider how to make the sample representative. A more representative sample will ensure that the testing yields a more accurate result regarding a specific lot. Again, though, the rules of large numbers and probability will always hold. On average, lots at the tolerance will be detected 95 percent of the time in spite of the sampling protocol. Without prior knowledge, the best sample will be a random sample. Without prior knowledge, dividing the samples into more pieces or specimens is better as it is more tolerant of clustering or layering, which might be present in a specific lot. It should be noted that 1,000 pounds of product will much better represent a field than ten 300-g samples (about 7 pounds), which is at the high end of testing programs for field testing.
To achieve the desired level of consumer risk (1 in 1 million chance of illness) just by testing is not practical. None of the current flavors of acceptance testing are even close to this tolerance. These types of arguments have forced manufacturers in most segments of the food industry away from lot acceptance programs in favor of process control. The cost of monitoring all production this intensely to ensure safety is prohibitive. Furthermore, the result of this intense testing is only a decision to accept or reject a lot as meeting or not meeting a tolerance. In the fanciful case study, after all the testing, the only reasonable decision is to not sell the 100,000 cases. Repeatedly making this same type of decision for RTE salads is not a path to success. At best, it will only postpone an outbreak of consumer illness.
At this point, it is hoped that the reader sees the many parallels between the simple case study and the RTE salad problem. As for the water, an approach that extends beyond testing is required. An approach involving a characterized process and testing to ensure that the raw product does not exceed the process capabilities is required. The process for RTE salads will be less robust than for water (unless one is concerned with preserving its antiaging properties) given the sensitivity of the RTE materials to potential processes. The RTE salad process must at a minimum avoid cross-contamination. If an RTE salad process could consistently deliver over 1.5 logs of lethality for all the organisms of interest, the risk of illness would be tremendously reduced to less than the 1 in 1 million tolerance we have been discussing, as shown in Figure 5. A more potent process will allow further reductions in potential consumer risk. This figure illustrates the impact of a reliable process on reducing the potential consumer risk with a practical testing program to ensure that raw material does not exceed a tolerance beyond the process capabilities. The RTE salad process must address the potentially present lower concentrations of pathogens where testing has no impact, yet the potential pathogens cannot be ignored. The combination of a realistic testing program and a robust process can greatly diminish potential consumer risk of illness.
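The effect of a kill step can be sketched with the same one-hit dose-response model used earlier; the 0.03 CFUs-per-serving raw-material bound below is a hypothetical figure chosen for illustration, not a value from the article:

```python
import math

R = 0.001              # assumed per-CFU illness probability (one-hit model)
raw_limit = 0.03       # hypothetical raw-material bound, CFU per serving,
                       # of the kind a practical testing program could enforce
log_lethality = 1.5    # process kill step, in log10 reductions

post_process = raw_limit / 10**log_lethality        # ≈ 0.00095 CFU/serving
risk = 1 - math.exp(-R * post_process)              # residual risk per serving
print(f"{post_process:.5f} CFU/serving, about 1 illness in {round(1/risk):,} servings")
```

A stronger process, or a tighter raw-material bound, pushes the residual risk further below the 1-in-1-million tolerance, which is the combined-strategy effect illustrated in Figure 5.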
It is important to note that this combined strategy does not rely on any knowledge of the frequency distribution of pathogens in RTE materials. It simply examines the whole range that could be present. This frequency distribution is not known and is unlikely to be known given the limited sensitivity of testing and monitoring methods. The best data indicate that the average incidence is very low. This proposed strategy relies on the successes of the current GAPs programs but includes a modest amount of testing to verify that the GAPs continue to function as expected and prevent the process from being overwhelmed. This type of approach may prove very useful as RTE processors seek to comply with the Food Safety Modernization Act by bounding the potential risk to consumers. Every processor of RTE product needs to understand the capabilities of its process.
SALAD QUALITY INSPECTION
This study aims to determine the quality of fresh salads sold in street shops, cafeterias, and restaurants in Ibb city, Yemen, by analyzing microbial parameters, with a focus on the coliform bacteria group. A total of fifty samples were collected randomly, following standard procedure, from different sources. Sample analysis was carried out in the laboratories of Ibb University in 2013, and the results can be summarized as follows: the total bacterial count ranged from zero to 33.9 × 10⁴ CFU, and the total coliform count ranged from zero to 21.7 × 10⁴ CFU, while the total fungal count ranged from zero to 12.3 × 10² CFU. The coliform species identified were Enterobacter, Klebsiella, E. coli, and Citrobacter, at 24, 22, 20, and 16%, respectively. Anaerobic bacterial growth occurred in 10% of the samples, and 8% of the samples showed no growth.
Microbial quality of ready-to-eat vegetable salads vended in the central business district of Tamale, Ghana
Godwin Abakari, Samuel Jerry Cobbina & Enoch Yeleliere
International Journal of Food Contamination, volume 5, Article number: 3 (2018)
Food safety problems still persist across the globe and remain a challenge to the general public and government. The study determined the microbiological quality of pre-cut vegetable salads sold in the Central Business District (CBD) of Tamale.
A total of thirty (30) salad samples were purchased from four zones of the District and transported to the Spanish Laboratory of the University for Development Studies, Ghana, for analysis. Standard microbiological methods in accordance with those of the American Public Health Association (APHA) were used to determine the presence and levels of bacteria in the salad samples. Escherichia coli was detected in 96.7% of salad samples, with levels ranging from 0 to 7.56 log10 cfu/g. Bacillus cereus was present in 93.3% of ready-to-eat vegetable salads, with counts ranging from 0 to 7.44 log10 cfu/g. Further, Salmonella spp. and Shigella spp. were present in 73.3% and 76.7% of salads, respectively.
Salmonella spp. and Shigella spp. counts ranged from 0 to 4.54 log10 cfu/g and 0 to 5.54 log10 cfu/g, respectively. Statistically, Escherichia coli, Bacillus cereus and Shigella spp. contamination varied significantly (p < 0.05) across the four demarcated zones. However, Salmonella spp. contamination did not vary significantly (p > 0.05) across the zones.
The study revealed that salads sold by street food vendors in the CBD of Tamale were unwholesome for human consumption and could be deleterious to the health of consumers. The contamination could be attributable to the source of production of the vegetables and to improper food handling. It is recommended that the Food and Drugs Authority enforce strict compliance with food quality standards at all food vending establishments in the CBD.
Food safety has become a serious concern and a major focus for many scientists in recent years, and public interest in food safety issues is increasing worldwide (WHO/FAO, 2015). Nonetheless, food safety problems continue to persist across the globe and remain a great challenge (Ntuli et al., 2017). It has been established that the business of food vending creates jobs and contributes significantly to the informal sector of the economies of most countries across the globe, and it also helps to address major social problems in less developed countries through the sector’s role of providing inexpensive meals to consumers (Alimi et al., 2016). Notably, Estrada-Garcia et al. (2002) reported that in 1998 approximately 28.5% of the workforce in Mexico was employed in the informal sector; in addition, 30.8% of the informal sector’s activities were in the food vending business, employing about 120,000 people. It is worth noting that the activities of most food vendors, especially street food vendors, usually go unregulated, mainly because of negligence and lack of enforcement of the laws governing food safety, resulting in unwholesome foods being served to the populace (Alimi et al., 2016).
The consumption of vegetables and vegetable products is vital for the overall health of every individual; however, microbial contamination of these vegetables has become a serious challenge deserving greater attention. Globally, salad vegetables are a major component of food vending and the group of vegetables most often implicated in this regard.
Salads are fresh vegetables which require minimal washing and processing, are cut into desired shapes and sizes with knives or other shredding utensils, and are usually served along with other foods, including rice (Ababio and Lovatt, 2014). Worldwide, salad vegetables are considered a major source of nutrients and, in particular, of cancer-fighting agents for the skin (Ramteke et al., 2016). Recent studies have established that consumption of salad vegetables can help prevent heart disease and skin cancers (Coulibaly-Kalpy et al., 2017).
Salad vegetables are mostly consumed for their nutritious components as well as their gustatory attributes when eaten in combination with other foods, which is sometimes a result of the culinary prowess of the food vendors (Choudhury et al., 2011; Alimi et al., 2016).
Salads are also sources of vitamins, minerals, proteins and other nutritional components relevant to the proper functioning of the human body (Amoah, 2014). However, ready-to-eat foods like vegetable salads are major potential sources of enteropathogens and food-borne illness (Mensah et al., 2002). Feglo and Sakyi (2012) recorded various levels of Staphylococcus aureus, Bacillus species, Klebsiella pneumoniae and Escherichia coli in different ready-to-eat foods in the Kumasi metropolis of Ghana. Salmonella, Shigella, Escherichia coli (E. coli), Clostridium, Staphylococcus, Campylobacter, and Vibrio are some of the common bacteria that cause food-related illness (Amoah, 2014).
Mensah et al. (2002) examined 511 ready-to-eat foods in Accra and reported the presence of mesophilic bacteria, Bacillus cereus, E. coli, Staphylococcus aureus, Enterobacteriaceae and Shigella sonnei in most of them. Similarly, bacteria such as Salmonella species, Staphylococcus aureus and Escherichia coli, which can be conveyed by food, cause food poisoning and food-borne illnesses such as tuberculosis, typhoid fever and cholera (Foskett et al., 2003).
SALAD QUALITY CONTROL
Fresh-cut salads are ready-to-eat foodstuffs with a growing market share that are increasingly popular with consumers. However, a significant part of the public considers that bagged salad production processes affect sustainability. In parallel, fresh-cut salad producers are deploying substantial resources and innovation strategies to improve the sustainability of the production process and the product, reflecting an increasing awareness of their responsibility. The objective of this study was to investigate whether a correspondence exists between consumer preferences and the fresh-cut salad sustainability attributes (environmental, economic and social) indicated by producers (on their packaging and/or company website). Consumer preference analysis of 12 attributes of fresh-cut salads was performed using the Best-Worst scaling methodology. Among the selected attributes, 9 were related to sustainability issues and 3 to intrinsic product characteristics. A paper questionnaire was developed and administered directly to consumers (n = 216) at different points of sale of several large retail chains in the Turin metropolitan area (Northwest Italy). The analysis of the results highlights that no direct correspondence can be found between the companies’ communications regarding sustainability and the real interest of fresh-cut salad consumers in these attributes. Moreover, in contrast to the growing ‘green’ attitude among consumers, the lack of consumer interest in the attributes of environmental sustainability underlines the need to increase consumer awareness of the issue. Thus, this research could contribute to the development of more targeted and accessible communication strategies towards consumers.
Consumers are becoming increasingly attentive and aware during food purchases, directing their choices towards sustainability choice attributes. Sustainable consumption can be defined as “consumption that simultaneously optimizes the environmental, social and economic consequences of acquisition, use and disposition to meet the needs of present and future generations”.
This attitude, in the “consciousness for sustainable consumption” (CSC) model defined by Balderjahn et al. (2013), influences product evaluation mechanisms with regard to aspects of environmental sustainability (awareness of the importance of environmental protection during the production, consumption and disposal of the product) [4,5]. It also includes social issues (respect for human rights, rejection of discrimination and child labor, fair compensation, as well as the revival of local products linked to tradition and territory) and economic issues (which guarantee a profit, as well as the survival in the market of small, family-run local businesses) [6,7,8]. A product differentiation strategy on the market must therefore include the development of sustainability attributes that reflect environmental and social pressures, in line with consumer preferences. In recent years, agricultural producers have introduced innovative production and processing systems in order to achieve high standards of process and product sustainability, together with innovation in consumer communication strategies. The latter focus on distinctive aspects related, for example, to the lessening of environmental impacts, the use of recycled or recyclable packaging materials [12,13,14] and the reduction or elimination of pesticide and insecticide use [15,16]. In addition, producers are also focusing their product innovation and differentiation strategies on aspects of social sustainability, linked, for example, to the revival of traditional recipes, local/territorial origins and a short supply chain [17,18,19].
These production guidelines are in line with international voluntary certification standards, such as those issued by the International Organization for Standardization (ISO) (e.g., ISO 14001, ISO 14025), which address the problem of sustainable production, including economic, social and environmental needs [20,21]. One of the most important private standards for primary production is the GlobalGAP system. GlobalGAP is voluntary and sets certification standards and procedures for good agricultural practices, food safety, environmental protection, food traceability, the health and safety of employees, and animal welfare.
In Italy, constant growth has been recorded in the consumption of fresh-cut fruit and vegetables and, more specifically, of fresh-cut salads. Consumers recognize these products’ health benefits, high service content and efficiency, linked to the reduction of domestic waste. In the first three months of 2019, sales of fresh-cut salads in Italy increased by 6.7% in value and by 9.8% in volume compared to the same period of 2018. Moreover, according to Nomisma research on Nielsen data from 2017, Italy is the leading European country in per capita consumption, with over 1.6 kg of fresh-cut salads consumed per person each year.