Summary

Practice guides are an important knowledge transfer tool: they organize knowledge and put it into operational form for use by professionals. As relay tools, they are identified by the knowledge transfer stream of the Réseau de recherche en santé et sécurité au travail du Québec as an important, even central, issue in occupational health and safety (OHS). Our specific interest here is the use and adoption of practice guides. According to a survey of Canadian OHS researchers conducted by Laroche (2009), a third of respondents had been involved in preparing a practice guide in the previous five years. It was in this context that a first, simple question arose within the study team: Do we know what a high-quality practice guide is and how to evaluate it? Are there assessment criteria or grids that would allow a preliminary analysis? Further questions followed: What do we know about the use and adoption of practice guides? Do they achieve the anticipated results?

Since little research has been done on practice guides in OHS, we turned to other fields for our knowledge review and quickly saw the magnitude of the work done in healthcare. Both the production of clinical practice guidelines (CPGs) and the number of studies on the subject have risen spectacularly over the past two decades. We therefore made healthcare our focus, with the objective of grasping its logic and coherence in order to better understand the developments and the issues at stake. Given the huge volume of literature, however, an exhaustive review would have been unrealistic. We instead opted for an approach designed to identify benchmarks that could be useful to OHS actors in the development, appraisal and implementation of practice guides. This approach covers two main, partly overlapping areas of research and development. The first focuses on the CPG itself: how to develop, assess and implement it. The second focuses on end users: How do they use the CPG? What do they think of it? What are they looking for?

A first observation arising from our literature review was the extensive involvement of public-sector organizations, and even governments, in the development of CPGs, which are seen as a cornerstone of health policy: CPGs have the potential to improve the efficacy of policies by promoting evidence-based medicine. In the early years of CPG development, national working groups were set up, mainly in English-speaking countries, to decide on and formulate development rules based on the sorting and grading of evidence. Basically, CPGs are collections of statements and recommendations; hence we speak of clinical practice guidelines rather than guides. This is understandable in healthcare but obviously less appropriate in OHS, where practice guides, and practice itself, have a broader meaning. Nevertheless, some international organizations, such as the International Labour Office, have started adopting the concept of guidelines.

Generally speaking, CPG development is approached very methodically in healthcare, and each step is carefully defined. However, and this is a criticism, we were unable to find any studies showing that CPGs developed in this way are of better quality. Follow-up studies of developers also showed that they adhere to the rules only in part, as the procedures are considered too cumbersome and too costly to implement.
In addition, the guidelines are mostly developed by professionals who make it their specialty, an approach that we do not see as appropriate for OHS. CPG assessment tools follow the same logic: they essentially evaluate the development process, or at least its description. Application of these tools shows that CPGs meet the criteria only very partially. The criticism offered above applies here too: the tools evaluate the development process, or rather the description of that process, and not the quality of the CPG itself. We were thus unable to find a satisfactory answer to our first question: when one consults a CPG, is there an instrument for assessing its content, structure, organization, and so on? That being said, the questions brought up in this study and the results obtained offer a wealth of lessons for OHS; they helped us situate our own issues and perspectives, which are progressively laid out in the report.

In general, our work highlighted three main criticisms of CPGs (that is, of their recommendations): they do not adequately take into account all factors (they are not systematic enough); they are not adequately aligned with the practitioner's decision-making process (which follows a problem-solving logic); and they do not adequately allow for the fact that each patient is a unique person, not a group or an average patient. These criticisms partially call into question the knowledge development process. The questions raised, and the resulting debates, are very relevant to the field of OHS.

Moreover, studies of end users show that users are in favour of streamlining efforts, but follow-up studies show that the recommendations are not applied to the extent anticipated. The primary reason is not that people reject the guidelines or are unaware of them. Rather, the barrier lies in the conditions of application, the contexts and the effort to be invested. Users are also wary of any coercion attempting to substitute itself for their own judgment. Given the efforts invested, these results are disappointing. The impact studies confirm that the anticipated effects have not been achieved. This has led researchers to focus more on implementation and to try to understand what hinders or promotes the use of CPGs, recognizing that too little attention has been given to this question. We retain three key results in this connection: the possibility of seeing the guide in use (e.g., observing someone using it and the results or benefits that follow), adequate conditions (e.g., human and material resources, organization, support) and the effort to be invested.

Lastly, in terms of implementation, all the studies show that passive methods such as dissemination are rather ineffectual, and that an implementation plan combining several methods is needed. However, none of the studies on this subject identified any truly effective method or combination of methods; the impacts remain modest at best. We conclude that no implementation strategy will succeed if CPG content is not sufficiently in line with the conditions of application. In our view, therefore, it is crucial to incorporate the question of implementation into the development process itself, not at the end but throughout. And although the results show that adopting a formal development method is no guarantee of success, we still see the development process as key.
In this process, however, the focus should be less on identifying evidence to be translated into recommendations and more on identifying people with significant implementation expertise. The decisive role of context demonstrated by the studies convinces us that context must remain central throughout the development process.