How Does Clinical Research Work? A Two-Part Primer. Part 2: How Do We Do a Study, and What Should We Measure?
Developed by CP-NET, an Integrated Discovery Program carried out in partnership with the Ontario Brain Institute.
Part 1 of this Reflection outlines the purpose of this report. It highlights how we ask research questions, and describes a number of research designs that can be used to study things. Part 2 focuses on issues in outcome measurement and on generalizing findings from one study to the next. We hope that this brief ‘25,000 foot’ overview of some issues in research in the field of developmental disabilities will provide readers with some insights and ideas the next time they hear about new research. We encourage people to ask critical, analytical questions and to reflect on whether the findings are credible, relevant and important to them.
“Why does research take so long?”
Research almost always takes longer than either the researchers or the people in the study would like it to! This is because each step of the process requires careful thought and planning, and each can be complicated by factors outside of the control of the researchers. Space does not allow us to go into detail, but the following points are some of the key milestones in developing, doing, reporting and acting on the findings from research.
Developing the research question
As noted in Part 1, asking a ‘good’ research question can take a considerable amount of time and effort. We need to know whether the question is important and relevant, and whether the answers we are seeking are already known in ‘the literature’. This can take weeks or months of background exploration.
Getting the question ‘right’
This part of the research process requires ongoing discussion among the research team. The reason is that there are always many questions, and many possible ways to ask those questions, and many possible ways to develop the study, as described below.
Writing and submitting a grant to get funds to do the work
Again, this is a complex process that usually requires months of writing and editing before the grant proposal is ‘ready’. At that point the grant is submitted to an agency that might be interested in supporting the research. The agencies’ policies all involve ‘peer review’ – the assessment of the proposed research by experts in the topic who are not involved in the study and can judge the merit of the ideas being proposed. It is only after the grant is recommended for approval (often, at best, only 15-20% of all submitted proposals are funded) that the researchers are notified and receive the funds for which they applied. At that point the study can move ahead.
Obtaining Research Ethics Board (REB) approval
All research with humans (or animals) requires careful consideration of the possible risks and benefits of what is being proposed. REBs are usually university- or hospital-based committees of experts whose responsibility it is to assess the risks and benefits of what is being planned, in order to protect people (and animals) from harm. This process often takes a few months, and usually only starts once the research funds have been obtained.
Doing the study
Obviously the time required to do the study depends on the questions being explored. One major challenge in human studies is to be able to recruit enough participants to the study to make it a credible piece of research. Depending on the time and other demands of the study, the people invited to participate (including parents) have to make their own decisions about competing demands on their time and resources. Thus there are times when a study may take longer than originally planned in order for the study to be ‘big enough’.
“What do we mean by ‘Sample Size’ – and how does it influence the results?”
An important consideration when we read the findings of any study is to know whether the study was large enough to allow the researchers to be confident that their findings are valid. This is because when the number of people in the study is ‘small’, there may not be enough observations to allow for an appropriate statistical assessment of the results.
On the other hand, when the study has very large numbers of observations one may find results that are ‘significant’ from a statistical point of view, even when the findings are too small to matter in the lives of the people who were in the study – that is, they may lack what is called ‘clinical significance’. In other words, we need to know whether the findings are important enough to people that there should be changes in the way services are provided, or whether the findings are mainly of interest to researchers.
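The way sample size drives statistical significance can be illustrated with a small arithmetic sketch. This is a hypothetical example (the numbers, scale and groups are invented for illustration, not taken from any study described above): a tiny 0.1-point difference between two groups on some outcome scale (with a standard deviation of 1) only crosses the conventional significance threshold (a z-statistic beyond ±1.96, i.e. p < 0.05) when the groups are very large.

```python
import math

def z_statistic(mean_difference, sd, n_per_group):
    """Two-sample z-statistic for a difference in group means,
    assuming equal group sizes and a known common standard deviation."""
    standard_error = sd * math.sqrt(2.0 / n_per_group)
    return mean_difference / standard_error

# The same tiny 0.1-point difference (SD = 1), in a small and a very large study:
small_study = z_statistic(0.1, 1.0, n_per_group=100)    # z ≈ 0.71: not statistically significant
large_study = z_statistic(0.1, 1.0, n_per_group=10000)  # z ≈ 7.07: highly 'significant'

print(f"n = 100 per group:    z = {small_study:.2f}")
print(f"n = 10000 per group:  z = {large_study:.2f}")
```

The identical difference is ‘non-significant’ in the small study and ‘highly significant’ in the large one. Whether a 0.1-point change actually matters to children and families – its clinical significance – is a separate question that no amount of statistical significance can answer.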
Analyzing the Data
There are a number of detailed issues concerning the way in which data are analyzed. One of the time-consuming aspects of this process is the need to start the analyses, explore the findings and then (often) go back and undertake further detailed assessments of the data to be sure that the findings make sense. Discussion of these issues, while important, is not presented here.
Reporting and sharing the findings
There are many ways that the results of a research study can be shared. The traditional way has been to write and submit a paper to a scientific journal. There, as with the grant proposal, expert ‘peer reviewers’ assess the work. Their responsibilities are to judge the quality of the research, the importance of the findings, and the clarity with which the findings are reported. This process usually takes months, and sometimes longer.
In the current era researchers also often write plain-language accounts of the research study and send these to the people who participated in that study. This is seen to be a very important responsibility of researchers – a way of thanking the people who were in the study, allowing them to know what that study found and what the researchers think the results mean.
Measuring Outcomes: What should we assess, how and why?
The issue of what ‘outcomes’ should be measured in research is a large topic, and will be discussed here only briefly.
The choice of the ‘right’ measure(s) depends completely on what the question is – as we outlined in Part 1 of this ‘Reflection’. Imagine, for example, that we are interested in improving people’s mobility. We want to offer an intervention that is likely to make things ‘better’. What should we measure?
For example, after the intervention perhaps we might want to measure:
- people’s speed of getting around,
- how much energy people use to get around,
- how people feel about getting around,
- whether people are doing more activities and participating more now that they can get around better.
The list of possible outcomes could go on even further! Each of these ‘outcomes’ might be appropriate – IF that was what we decided we wanted to know when we planned the intervention. However, if we want to know whether the intervention helps people do more activities with improved mobility, but we only assess how they feel about their changed mobility, we might get some good information – but we will not have answered the question we started with!
Measures are tools that work to do specific ‘jobs’. We need to know that the measures we choose are the right tools for the questions we want to try to answer. If we use the wrong tool, we may well get the wrong answer. If we use a tool that doesn’t ‘work’ for that job, we likely will not be able to answer the question we started out to address.
Thus, for example, to measure changes researchers need tools that have been shown (‘validated’) to be able to assess change when it happens. To predict a person’s future status we need tools that have been demonstrated to be able to predict. To know whether people have more or less of some characteristic we need tools that can discriminate… and so on. When reading research critically it is important to consider the tools or measures that were used to answer the research question, and whether they were a good fit.
When looking at what was measured in a study, it is also important to know whose perspectives were considered. There are times when the outcomes that are measured are important to doctors and therapists (e.g., Does Botox make a difference to the degree of spasticity or the range of motion of the joints?) but these same outcomes may be of less interest to parents, who in fact want to know whether their child can now climb the stairs on the playground equipment.
“Why can it be a challenge to apply research findings from one study to a different situation?”
One very useful way to think about clinical research is to try to imagine what factors might influence the study design and interpretation. We call these factors “sources of variation”, and they can be thought of as the “Yes, but what about…?” ideas that we often think of when we hear the results of a study.
In studies of childhood disability such factors might include children’s age, sex, the “severity” or “complexity” of the condition, as well, of course, as the nature of the condition itself. For example, doing a study that compares the outcome of an intervention in children with cerebral palsy with the outcome in children with muscular dystrophy would rarely make much sense. Equally challenging would be to try to compare the impact of a treatment in infants with cerebral palsy with the impact of that same treatment in teenagers with cerebral palsy. In these examples the impairments, or the ages, would be expected to influence any outcomes, and therefore have to be recognized when interpreting the research.
The challenge of trying to generalize from one study to another is common. If an intervention works well with children with a less severe form of a condition, we cannot automatically assume it will work as well with children who are more affected by that same condition. This means that we often have to try to replicate the findings from a successful study with other children than the ones with whom the work was originally done. Similarly, a medication that might work under one circumstance might not work under another and may need to be studied in the new circumstance.
A related and somewhat frustrating issue concerns the difference between ‘can it work’ and ‘does it work’. The fact that something is ‘efficacious’ (meaning it works under the best of circumstances) does not mean that it will necessarily be ‘effective’ (meaning that it works in the ‘real world’) when offered more widely. If, for example, a walking brace allows people with an impairment to walk better on smooth surfaces, but interferes with their stair-climbing, or is very expensive because it has to be custom fitted and made by hand, people may be reluctant to buy or use it, even though it is known to ‘work’ when used in controlled situations. Thus, it is always important for research consumers to differentiate between efficacy and effectiveness claims when a specific intervention or treatment is promoted.
To summarize, consumers of research – families, clinicians, other researchers – can become critical thinkers about what they hear and read, in order to decide whether new findings are credible and relevant to their issues. This essay has been prepared with that goal in mind. The author, and CanChild, thank the people who offered their input, and we welcome further feedback.
Want to know more?
For questions about this Keeping Current, please contact Dr. Peter Rosenbaum. This Keeping Current was developed as part of CP-NET, an Ontario Brain Institute initiative (www.cp-net.org).
Interested in Reading More?
1. Streiner, DL, & Norman, GR. (2009). PDQ Epidemiology, 3rd Edition. Shelton, CT: People’s Medical Publishing House.
2. Peninsula Cerebra Research Unit (PenCRU). (2014). http://www.pencru.org/ – See the “What is Research?” section.
The author gratefully acknowledges and thanks his colleagues Dayle McCauley and Dianne Russell, and parents Francine Buchanan and Oksana Hlyva, for their time and thoughtful feedback on this Reflection.