Episode 8: Can Injuries Really Be Predicted?


Everyone’s racing to figure out injury prediction, but can it even be done?

Sam Robertson chats to experts Dr Ken Quarrie and Professor Sergio Fonseca.

Listen and subscribe on:
Spotify | Apple Podcasts | Stitcher | Overcast | Pocket Casts | RSS

Injury – it's one of the biggest problems plaguing the sports industry. Aside from the obvious potential for physical pain, injuries are to blame for a lot of shortened careers, lost matches, and millions of dollars in potentially avoidable spending. It's no surprise then that sports organisations are eager to find that magical technology – the one that can predict (and avoid) injuries with genuine accuracy.

But is this even possible? There are many who say it is, or indeed that they're already doing it, but Sam and his expert guests aren't so sure. In fact, Sam's had a bit to say on this topic before – see here and here.

We're first joined by Dr Ken Quarrie, Chief Scientist for New Zealand Rugby, where he helped develop the nationwide injury prevention program Rugby Smart, which has since been adopted in countries across the globe.

Next up, Sam speaks with Professor Sergio Fonseca from the Department of Physical Therapy at the Universidade Federal de Minas Gerais in Brazil. One of the editors of the Brazilian Journal of Physical Therapy, Sergio's research focuses on applying dynamical systems and ecological approaches to musculoskeletal rehabilitation and sports.

Together, Sam, Ken and Sergio dig deeper into this hot button topic to look at what's stopping us from making injury prediction a reality.

Want to dive deeper into this episode? Start here:


Full Episode Transcript

08. Can Injuries Really Be Predicted?

Intro

[00:00:00] Sam Robertson: Injury - it's one of the biggest problems plaguing the sports industry. Although many consider injuries simply part and parcel of just about every sport at every level, the ability to predict injuries, and avoid them, has become the pot of gold at the end of the rainbow. And one that sports organisations are eager to find.

[00:00:20] There's something in it for everyone. For the athlete, there's the potential to lengthen careers and save a lot of pain. For fans, it could make for better watching, as the best players are kept on the field. And for organisations, well there's millions of dollars to be saved. 

[00:00:36] Now we normally kick off each episode in a somewhat neutral position - no hard opinions on the question being asked, at least until the end. But to give that impression here would be a little disingenuous. I think this is perhaps the topic I get asked about more than almost any other. And I've made it quite clear, in both public presentations and on social media, just how skeptical I am about injury prediction. We'll link to some of this in the show notes if you'd like to take a look. 

[00:01:05] So one might ask, why bother discussing it again here? Well, first, I'd like to think both myself and our team at Track are open-minded to new perspectives. And second, despite my skepticism, and the lack of success in this area thus far, the interest in injury prediction definitely doesn't appear to be slowing down. 

[00:01:26] And perhaps that shouldn't come as a surprise. On the face of it, predicting injury appears a solvable problem. Find some decent data, run a statistical or machine learning model, and use the subsequent recommendations to intervene. But look a little deeper and we find more questions than answers. Can we even get the volume and quality of data needed to accurately predict an injury? How do we act on these predictions once we generate them? Is it realistic to expect athletes to not get injured at all? And does seeing into the future have an unexpected downside? 

[00:02:01] I'm Sam Robertson and this is One Track Mind.

Interview One - Ken Quarrie

[00:02:05] Hello and welcome to One Track Mind, a podcast about the real issues, forces and innovations shaping the future of sport. I'm your host, Sam Robertson and on this episode we're asking: can injuries ever be predicted? 

[00:02:25] My first guest is Dr. Ken Quarrie. Ken is a Chief Scientist for New Zealand Rugby, where he has worked since 2000. During this time, he has helped to develop the nationwide injury prevention program, entitled Rugby Smart, which particularly targeted permanently disabling injuries and has now been adopted in Japan, Australia, and South Africa. Ken is a member of World Rugby's Scientific Committee and has published research on a range of performance topics.

[00:02:53] With specific reference to today's question, he has investigated preventing spinal injuries, the effect of compulsory mouth guard use on reducing dental injuries, the types and risks of tackles in professional rugby, shoulder injury mechanisms, as well as standardising methods for collecting and analysing injury and illness information across sport. 

[00:03:14] Ken, thank you so much for joining us. 

[00:03:16] Ken Quarrie: Thank you for having me on the program.

[00:03:18] Sam Robertson: Yeah, thank you for coming, and there's so much to cover and I think we need to get straight into it. But before we do, something that we don't normally do on the show, but that I think would actually be quite useful for this topic, is to start with a default position. Now, the topic today is can we predict injury in sport? And if we're asking, can we do that now? My response is a blanket 'no'. But I guess if we're talking about will we ever be able to, then my response is 'maybe'. In fact, it's probably closer to 'probably'. Just quickly, do you have a starting position on those two questions?

[00:03:51] Ken Quarrie: Yes, and I think they're probably similar to yours with the possible distinction of what we mean by prediction. If, for example, we have some injury surveillance system in place for a sport, and it's producing valid and reliable measures of injury experience among the participants, then I think we can, at a group level, predict what will happen among that athlete population in the next series of exposures with reasonable confidence.

[00:04:29] So if I look at professional rugby teams, then we have a pretty clear idea of what sorts of injuries have been occurring to what body sites and type combinations and the average severity of those injuries, and within reasonably narrow bounds, I think we could predict next year, what sorts of injuries and the costs of those and how many players would be injured, unless there was some major change to the laws or the structure of the game.

[00:05:03] So, other things being equal, then at a group level, I think we can do a reasonable job, but in terms of forecasting a player being injured in some upcoming match, which player and in which match, no, I think we're a long way away from that. And it's not that it could never happen, but the types of information being collected and the ways that the information is being analysed at the moment, I think we are a long way away.

[00:05:33] Sam Robertson: Before we dig into some of the reasons as to why that's the case, and what we'd indeed need to do in order to make it maybe more of a reality. You talked a little bit about forecasting then, and maybe differentiating that from a term like prediction, or considering it a sub-branch of prediction. I think there's a lot of confusion with the terminology in this space as well. And two other terms that come to mind are 'explanation' or 'explanatory modeling', so looking back, which I think you see a lot of scientific research doing, and sometimes it's even been called prediction, which is another conversation for another day. 

[00:06:08] But we heard in the introduction a little bit about your success with the Rugby Smart program, in helping to reduce the incidence of some of rugby's most serious injuries, particularly those to, say, the head and the spine. Now clearly the emphasis with that work, correct me if I'm wrong, is more prevention than prediction. Do you see those terms sometimes muddled as well? And how well are those differentiated? 

[00:06:29] Ken Quarrie: I'm not sure what people's general impressions are of the distinction between those terms. I think that prediction is typically looking at some series of inputs and the relationships among or between those variables and some output, and looking at how that input set produces an output value. And it can be as simple as, for example, the relationship between something like height and weight, and you can predict somebody's height from their weight or vice versa, it's just that it's not a fantastic prediction because there is only a reasonably weak relationship. It might be 0.3 or 0.5 as an R squared sort of value. So you can predict one value from the other; it's just not a fantastic prediction. 

[00:07:24] And in terms of prevention, if we have some general factor that we've identified as being a risk to participants in activity, it may be that modifying that risk can have a population level effect. We don't know which particular people it would impact upon, but over the entire group of people, it may well have some beneficial effect, or otherwise. But that's why we would have some ongoing monitoring to see what the effect of the intervention was. 

[00:08:00] But I'm not sure about the distinction between prevention and prediction. I don't normally consider those two things together. Whereas forecasting, I would see as a subset of prediction that has to do with time series data, or time-bound data. 

[00:08:19] Sam Robertson: And therein lies a little bit of the difficulty. I think in terms of some of these terms, they can be sub features or sub-branches of the same type of discipline, and some of them are quite clearly differentiated. And I think that's a nice, difficult starting point for us today. 

[00:08:35] Let's come back to the top and talk about the problem at hand, which is injury prediction. I think it sounds like we're both starting from a position that it's not really possible now in terms of being a good, actionable, effective prediction. As you talked about then there as an example, yes, we can predict all sorts of things. It's the difference between being able to predict something and being able to predict it well. And I think that was the distinction you made there with height and weight. 

[00:09:00] What I'd like to really talk to you about and get your thoughts on, as someone who's spent a lot of time in this space, what would be required, what are the key developments that we would need to happen in order to make it a reality? I  don't think this problem is going away, the data doesn't suggest that it's going away in sport. And I also don't think it's going away in terms of people's interest in it. It costs the industry a lot of money. It costs people careers, the fans of course want to see the best players and the best athletes compete around the world in various sports. So maybe let's spend some time on that.  

[00:09:32] I think the first key development, and maybe not the first, but a really important development, would be simply acknowledgement and moving the field forward in that we need a collaborative effort here. And I mean a proper, real, large-scale, interdisciplinary collaboration. And I know that kind of term, interdisciplinary, is thrown around a lot these days, and certainly it's a bit of a buzzword in academia, but the requirements, I would say, are really good scientific researchers to determine etiology and models of injuries, different injuries; we probably need entrepreneurs, engineers and manufacturers to develop better technology to help measure risk factors; computer scientists and statisticians for better computation and analytical tools. Of course, we need athletes and sporting organisations to commit to buying in and sharing and pooling their data. And then of course, that's not even considering decision-makers, who we need to understand more on how they would act or decide on information or recommendations provided to them. And there's probably more than that. Now, I think all those groups in some way are doing their bit to try and help with the injury problem, but I'm not really convinced we're seeing a global or a large-scale, industry-wide, concerted, joint effort. You might have a different view. 

[00:10:44] That was a fairly long overview of the problem. But perhaps let's start by delving into some of the key areas a little bit deeper and try and identify where we need to move more. So at the start of my little rant there, I talked about models of injuries. And what I mean by models of injuries is levels of cause and effect between certain variables and injuries, the differences between types of injuries, the role of the sport environment on injury itself. For example, I might suspect that a head injury or a spinal injury, how that occurs in a sport, is quite different to how a calf strain would originate. How much room for improvement is there in this space, in this kind of level of inquiry, in your opinion? 

[00:11:22] Ken Quarrie: I think there is a lot of scope. I agree with the point that you made about the various people and organisations involved all having a part to play. In terms of ways of looking at these issues, one that has been raised, not just within sports injury epidemiology or sports injury prediction, but in public health and epidemiology more generally, is that rather than viewing our approach to solving these issues as a reductionist or linear approach, we should be moving to a complex systems approach or dynamic systems approach, or people use various terms. And again, the definitions matter; what people actually mean by that isn't necessarily clear.

[00:12:15] But if we look at an area where there is some complex phenomenon and I would take, for example, weather forecasting, and the sorts of inputs that are required to produce forecasts have grown over the past two decades or so where we have, around the globe, multiple centres, people have put weather monitoring devices all over the globe and bringing in information from air travel and weather balloons, and local thermometers and sensors all over the place.

[00:12:54] And all of this information is constantly being fed into models that work to try and forecast what the weather is going to be like in a day, or two days, or a week, or a few weeks. And there have been huge investments all around the world in weather forecasting and meteorology, to try to improve forecasts. And at the moment, even the best forecasting systems pretty much get to 10 days or two weeks and are not really much better than a coin toss, and that's with a huge amount of investment. And if I'm thinking about sports injury epidemiology, I suspect that a lot of the inputs that we use at the moment, piecemeal, aren't necessarily related to the outcomes of interest anyway, and they're collected because we can collect them. They're convenient for us and are a part of ongoing processes that we have.

[00:13:56] So people might be monitoring, for example, how much running players are doing in training sessions or how many collisions they are being exposed to in training sessions and matches, and are then trying to predict whether they will become injured or whether their injury risks are changing, the probability of them being injured in the future.

[00:14:16] And just as you said, I think we need a much, much bigger data set of information that is actually relevant to the etiology of injuries. So if I look at rugby, which is the sport with which I'm most familiar, most of the injuries occur in contact situations. And as far as I can tell at the moment, the risks for any particular player sustaining an injury in any particular match from a contact situation are almost random.

[00:14:50] Every player is exposed and every player is at risk in every game they play, and yes, some positions, some players, are exposed to more contact situations and have higher rates of injury in general. In terms of taking all of the information about that person prior to some specific game and making a forecast about whether they would be injured in that next game up, that's a really difficult problem because we don't have a way of validating whether that prediction would have been true. So for example, if we hold the player out from play, then they have no exposure and they don't get injured. If they play, then there was a chance that they would have been injured anyway. So maybe at a professional rugby level, the chances of being injured for a given player in a given game are something like 10% per player, per match.

[00:15:47] And we don't know necessarily whether that person, if they did sustain an injury, whether that was just how it would have been or whether the fact that all of their genetic history and training history and injury history and everything that's happened to them up to that point, put them at a substantially higher risk, and whether we would've made a different decision leading into it based on that information. 

[00:16:15] Sam Robertson: You've covered a lot there, in particular towards the start talking about the inputs in some of these models, and then towards the end of your answer about the decision making. I think I'll return to that decision-making component in a moment, but coming back to the inputs. I often view this problem, but also a lot of problems in sports science or sports performance, as being improvable by having better data, but also more data. And I think people often raise their eyebrows at the second part there, because I think there's a perception in sport that we're drowning in data, like in lots of industries, but I would argue in some problems we don't have anywhere near enough. And I think this is one of them. 

[00:16:54] And so when I talk about the quality of data, what I'm talking about is not only the validity of that data for a particular problem, but also the resolution: how accurately can it really detect things like a contact that you just mentioned in rugby? And so it strikes me, when thinking about that, that you could have a situation where a variable, or something measured from a technology or some other means, is actually quite predictive and quite important in and of itself, but it manifests itself in quite a random way in competition, which is where my mind was going when you were speaking then. That's a very wicked problem, isn't it? I mean, you could have something that's highly predictive, but you have no idea when it's actually going to occur in a competition.

[00:17:38] Ken Quarrie:  Yeah well, the technology has also raised ethical issues for medical practitioners, and strength and conditioning staff, and coaches. So if, for example, you had some instrumented technology that measured every impact a person in rugby was exposed to, and you found that a particular player over the course of several weeks of play had a much higher contact load than, for example, other players in their position or other players in the team, what do you then do? 

[00:18:14] So, for example, if a player had sustained multiple impacts and these were creating accelerations to their head, and you're left in a position of going, well, maybe they're at higher risk of sustaining a brain injury in their next match. Do we play them or do we not? And either way you're in a tough ethical position, because if you say to a player, say it's coming into a final of an important competition, sorry we think you have been exposed to a higher number of brain accelerations than most people, and we're going to pull you out of this next match on that basis. And they say, well, what is the predictive ability of the system to say whether I would have been injured in that next match and you have to shrug your shoulders and say we don't really know, and they can turn around and reasonably say, well, you're restraining my trade. You're not letting me perform at this really important juncture.

[00:19:17] On the other hand, if you did let them play and they sustained a permanently disabling brain injury, people could reasonably turn around and say, well, look at all these warning signs coming into this match, but you let them play. And I think that type of ethical issue still needs to be thought about and resolved. I'm not saying that it exists at the moment, but it's a potential that people, if they are going to use systems like this, really need to start thinking about. 

[00:19:41] Sam Robertson: Yeah, I'm not convinced that it exists a lot either, but I think it probably should. I think, let's face it, that scenario you just outlined is occurring right now, in terms of the first part at least, where people are being ruled out of matches based on, no I'm not going to say flimsy evidence, but perhaps evidence that isn't as strong as it could be.

[00:20:01] And it does lead to further questions about things like, what's an acceptable risk? You know, where does the cutoff lie, if there's some kind of relationship between these markers that has caused the medical practitioner to rule an athlete out of a competition? And there's obviously uncertainty, and you used the weather example earlier, there's uncertainty in every prediction anyway. And so how wide is that confidence band? It leads to all sorts of analytical and decision-making type questions as well. 

[00:20:28] I guess that's a nice segue to move into talking about how those predictions or those recommendations are actually generated, and I think there's two parts when we talk about that. There's the algorithms or statistical techniques that are being used and whether they're good enough, so to speak, to capture the phenomena of interest, in this case injury. 

[00:20:47] And then a second, which is a different question altogether, is computation. And I guess what I mean by that is, injuries happen acutely, a lot of them. Not all of them, but a lot of them happen, or at least the onset of them is felt, acutely. I wonder whether we're in a position, I don't think we are right now, where we could actually act on a prediction very rapidly in order to intervene and stop an injury. And where I'm going with that is very future focused, in terms of being able to have real-time access to data during competition and training. I've got visions of a medical practitioner running out onto a field of play and stopping someone, saying, if you get one more impact, you're going to be injured. That seems far-fetched, I know, but I could see this world coming. I don't think it's a world we want to come, but I could see that scenario coming in the future. 

[00:21:33] Ken Quarrie: Yeah, if I think of mechanical models of injury, people conceptualise injuries as being a transfer of energy that exceeds the ability of the body's tissues or structures to maintain their function and structure. And you could subject somebody to a transfer of energy that is such that, regardless of anything that they've done up to that point, they will sustain an injury. So there are energy transfers that, if we take a silly example, if somebody drops a hundred metres onto concrete, they will be injured to the point that they almost certainly don't survive. With that transfer of energy, it doesn't matter how well conditioned that person is; if they're exposed to some degree of energy transfer, they will become injured. 

[00:22:33] Now, as we pull that energy transfer amount back, there are probably points at which the person's conditioning level would prevent them from sustaining an injury that a similar person who's not as well conditioned, they may well have an injury from that same input. And so I think there is some area in the middle there where the person's conditioning levels and everything that they bring to that exposure, do play a part in whether some injury occurs to them or not. For things like maybe muscle strains or non-contact injuries, there may be possibly in the future a greater degree of utility for the systems that we're sort of talking about with prediction. I'm not sure that that would apply to the same extent with contact injuries. For the reason that I just gave, where some of the contacts are such that the energy transfer is just so high that regardless that person will sustain an injury in that contact. 

[00:23:39] Sam Robertson: Yeah and I think when you were speaking about your example then of dropping onto concrete, my mind went to physics, but it also went to simulations, the ability to simulate that over and over again, with people with different layers or levels of conditioning. And of course, we know we can't do that, because we're not afforded the opportunity to experience all these different permutations and examples of competition that would allow us to do it. And even if we could ethically it would present a massive problem by just exposing people to different stimuli all the time. At least with elite athletes, in fact, let's face it, with anyone. 

[00:24:13] The other consideration around analysis that I think is quite fascinating here is machine learning. And I think we've seen, you just have to go to the injury research and you can see a lot more research now being done using machine learning methods. And I think a major concern people have called out with using machine learning for injury prediction is that, particularly with the black box type methods, we can't actually work out or determine how a prediction has been arrived at. And therefore it can't influence practice, it can't allow practitioners to refine their trade. 

[00:24:47] Now that is a concern, but it's an interesting question in and of itself, I think. Because, to me, if you had a highly successful and a highly accurate predictive model that worked, do we actually need to know how it works? If it works, it works, and we have confidence in it. It's an interesting question. I think we always want to understand things and we don't want to trust a black box model, but if something's good enough... I'm not advocating one way or the other. I've been highly critical of using machine learning for this purpose, but it does beg the question, do we actually really need to know? 

[00:25:18] Ken Quarrie: Possibly not. If your prediction model produces accurate predictions, then who cares how it works, at one level. And so I was just thinking of 'any sufficiently advanced technology is indistinguishable from magic'. I think it was Isaac Asimov that said that, I can't remember. But the idea that we've got some system and really it's an oracle. We don't know how it works, but it spits out the right prediction every time. Why wouldn't we use it? On the other hand, it may well work until it doesn't. And if we don't understand why it's working, then as you say, it's very hard to build theory from it or drive action based upon it, because we're always having to go back to the oracle to see what it would say for this next thing.

[00:26:06] And I think we're a long way away from having such an oracle for sports injury prediction. And I think that you mentioned before both data quality and data quantity, and I don't think that having more and more data is useful if it is not the right data. So we can collect data on anything we'd like and have mountains and mountains of data, but if it has no feasible relationship to the outcome that we're interested in, then why bother collecting it? 

[00:26:41] On the other hand, if it is relevant information, then I think at the moment we do have an issue with a lack of data, and people feed information into models and trust the models and look at the outputs. And maybe in some cases, I can't generalise because I'm not sure, but it may be in some cases they are not sufficiently skeptical of their models. So they look at information, it goes in, gets processed, and you get your output. And there are potentially issues with overfitting, if you have small datasets, and there are potentially issues with lack of generalisability, which is the same thing expressed differently.

[00:27:26] But again, I think a lot of the information that's collected and put into athlete tracking systems is that which is readily collectible, not necessarily that which is relevant to injury etiology. And we still have a long way to go to understand which factors in fact do relate to injury etiology. And, if it is dynamic, then those factors change over time anyway, because the defining characteristic of a dynamic system is that the relationships among and between the variables change with time. So we would never have a prediction model; we would have prediction models that are continually updated to be providing those inputs to decision makers.

[00:28:12] And again, I think we're a long way away from being in that sort of position at the moment. Often when people say, I think we're a long way away from it, somebody pops up and says, well, here it is, we're doing it while you're saying we can't.

[00:28:26] Sam Robertson: Well that's, I think that's a nice segue into probably what I wanted to finish on with our conversation today, which was about the human factor in all of this, and perhaps we could humour ourselves to speculate that we do arrive at that situation where we do have a working solution. It doesn't mean we're there, and what I mean by that is it certainly means that we have a decision to make on our hands about how we decide to pick up that model and use it. And again, this is a problem that everyone in sport probably wants to solve or have better outcomes for, but it's still, at least the way we work now, the decision to take athletes out of training or competition, or let them train and compete, is still going to reside with a human, at least for the moment, whether it's a coach or a medical practitioner.

[00:29:12] And so we kind of talked about this a little bit earlier in terms of how someone acts on a prediction, what's their acceptable cutoff, of risk of injury, before they decide to act in a certain way? But are there other things that we need to be aware of here? I think the other one that you mentioned a couple of times today is around the information that's available and I keep thinking about availability bias, when you talk about that, and I think that's a real risk here. You're going to use the data that's available to you, in front of you, and sometimes that means things that are hard to measure or things that we don't want to measure are the things that get left to the side.

[00:29:45] Are there any other risks or other things that we'd need to be aware of if, let's say tomorrow, we wake up and we do have a working tool? Will people actually use it? 

[00:29:53] Ken Quarrie: I don't know the answer to that question. I suspect that the human elements often override what the information to hand would best suggest that they do. And people involved in sports, at the professional level of sports anyway, are often highly motivated to win; their careers and their professions are based on this. And so if you gave a coach information that said, your three best players coming into this final are at high risk, we advise you not to play them, then I think they're probably gonna look at you and just completely ignore you - if you had a job with them anyway. 

[00:30:40] So I think that the human factors and the human relationships always play a big part in this. And oftentimes, where people are at is, we know this stuff is not perfect. We know it's best guess. This is our best impression of what we should do here, and it's some combination of art, experience and science, and that's where we are and that's the basis upon which we will make decisions and operate. 

[00:31:11] And again, I would look at the analogy of screening procedures for diagnoses of some outcome in medicine. We would have information about some risk factor or combination of risk factors, and then be trying to make decisions based on it. Sometimes the decision will be right, and so it can either be a true positive or a true negative, and sometimes it'll be wrong, we'll make errors, a false positive or a false negative error. But one thing I don't think people quite get is just how strongly related those risk factors have to be to be useful in terms of producing a good decision algorithm or a good screening procedure. And I would come back to the fact that we might be using some sort of complex modeling technique, or we might be using some sort of linear mixed model or something like that, to produce the algorithm upon which we will base our decisions - the mathematics of applying that to the screening, or to the decision that you then make, still apply. And I don't think we have anything in sports injury epidemiology that currently allows us to make good, defensible decisions about whether somebody should play in the next game or whether they will be injured. That is a long way away. 
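Ken's point about screening mathematics can be made concrete with a quick back-of-the-envelope calculation. The sketch below is purely illustrative: the 90% sensitivity and specificity figures are assumptions chosen for the example (and are far more optimistic than anything published sports injury models achieve), while the 10% per-player, per-match injury rate is the rough figure Ken mentions earlier for professional rugby.

```python
# Illustrative only: how base rates limit a screening tool's usefulness.
base_rate = 0.10     # ~10% chance of injury per player, per match (Ken's rough figure)
sensitivity = 0.90   # assumed: P(flagged | player will be injured)
specificity = 0.90   # assumed: P(not flagged | player will not be injured)

# Positive predictive value via Bayes' rule: P(injured | flagged)
true_pos = sensitivity * base_rate              # 0.09 of all players
false_pos = (1 - specificity) * (1 - base_rate) # 0.09 of all players
ppv = true_pos / (true_pos + false_pos)

print(f"PPV: {ppv:.2f}")  # → PPV: 0.50
```

Even with this implausibly accurate screen, half of the players it flags would never have been injured, which is exactly the "restraining my trade" dilemma Ken describes: every second player held out of a final on the tool's advice would have played unharmed.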

[00:32:34] Sam Robertson: And like so many things in sport, it's that balance of risk and reward. I do think people sometimes overestimate the risk, and maybe they also undervalue, or underestimate, the reward of certain circumstances. Maybe that's a nice piece of research for someone to do in this area. I know we're out of time, but it's been a pleasure. Dr. Ken Quarrie, thank you so much for joining us on the show. 

[00:32:56] Ken Quarrie: Thank you very much, and I hope some people can take something away from this. I wish I could say things more definitively and be more bullish or optimistic about where this all sits, but I really think we need to be pretty humble in our knowledge at the moment, and a little bit skeptical about some of the claims that are made. And understand who's making those claims and why - because people make money out of selling, for example, injury prediction systems. That's okay, but we just need to understand what their motivators are as well, and why they might be highlighting the benefits rather than the limitations of the systems that they're producing.

[00:33:38] Sam Robertson: All I can do - and our listeners can't see this - is nod vigorously to that. Thanks once again, Ken. 

[00:33:43] Ken Quarrie: Thank you.

Interview Two - Sergio Fonseca

[00:33:49] Sam Robertson: Our next guest is Sergio Fonseca. Sergio is a Full Professor at the Department of Physical Therapy at the Universidade Federal de Minas Gerais in Brazil, where he's also the Director of their Sports Training Center. He received a Bachelor of Science degree in Physical Therapy from UFMG, a Master's in Physical Therapy from the University of Alberta in Canada and his Doctorate in Applied Kinesiology from Boston University.

[00:34:14] He was also an invited scholar at the Center for the Ecological Studies on Perception and Action at the University of Connecticut. Currently Dr. Fonseca is one of the editors of the Brazilian Journal of Physical Therapy; he has published more than 100 articles in peer-reviewed journals, and his research activities focus on applying dynamical systems and ecological approaches to the understanding of human movement, especially in the areas of musculoskeletal rehabilitation and sports. Professor Sergio Fonseca, thank you so much for joining me on the show. 

[00:34:45] Sergio Fonseca: Thank you for the invitation. 

[00:34:47] Sam Robertson: We've just heard from Dr. Ken Quarrie with his views on two questions, and I seldom repeat questions to guests, but I think in this context, it's useful to establish a quick starting point before we get into this topic in detail. And the two questions are, firstly, do you think that injuries can be predicted now? And if not, will they be in the future? Just a short response, and then we can kind of unpack those as well. 

[00:35:13] Sergio Fonseca: Okay, so for a long time I thought that it would be possible to predict injuries, then for some time I became a little bit skeptical about that, and now my position is that I'm positive injuries cannot be predicted in the way people are trying to do it. In terms of the future, I have a long and a short answer to that. I will give you the short answer, and the short answer is: right now, no, I don't see it coming, or happening. 

[00:35:41] Sam Robertson: Well, we will bring that up again and  I will get the long answer, but we'll maybe try and investigate that under a few different sub themes.

[00:35:49] We very briefly touched on this in the conversation with Ken, but you've written extensively on looking at injury through a complex systems lens, among other topics as well. I wonder if, for listeners, you can explain briefly what a complex systems approach entails, at least in its application to injury. And maybe after that we could talk a little bit about how it differs from other perspectives or approaches that we've seen in both practice and the literature. Because if we are going to see advances in this area at all, I personally believe a complex systems approach would be part of that answer. 

[00:36:24] Sergio Fonseca: We can have at least three kinds of systems. A simple system is something we're dealing with every day, like simple puzzle situations that we try to solve in a linear way, very fast, without much thinking. We have complicated problems, which are those we spend more time trying to understand - basically, they are simple problems with many parts. So they're of the same nature: a simple problem and a complicated problem are the same thing, but the main difference is based on quantity. We have many elements, and we try to understand how they fit into the big picture. 

[00:37:03] Okay, so that's one class of problems, and then we have complex systems. With complex systems, we have complex problems in which the solution doesn't lie in the elements that comprise the problem. At every level we try to assess it, new things come up and new properties appear. So when we reach the final problem - the question we want to solve - it's basically different from where we began.

[00:37:36] So if I take, for example, a puzzle and I set it up, and then take it apart, that puzzle is made up of the parts I used to make it. So I have a clear relationship between parts and whole - the whole is totally based on the parts. In a complex system, imagine that I'm setting up the same puzzle, but when I put two pieces together, they disappear and something else comes up - a new piece that didn't exist before. So I connect that to some other thing, and this changes qualitatively all the time. So when I get to the end, my end product cannot be traced back to the elements I started with. That is a complex problem. 

[00:38:19] So sometimes, for some reason, we decide to approach complex problems through the lens of complicated ones, and that makes it impossible to solve the problem. It's an ontological problem. If I want to understand a system, I must know its origin, its reality - that's ontology. Because what I see today is people trying to treat problems that are complex as if they were merely complicated. So it becomes an epistemological problem, because I'm trying to create new methods to solve a problem that doesn't live at that level. So it's not about epistemology all the time. Sometimes you have to go back and think ontologically: what's the nature of the problem I'm dealing with? 

[00:39:12] Sam Robertson: I want to pick up on that point you made, and particularly what comes to mind: the transition of problems between those states, which can occur as well. I think scientific inquiry, when done well - and this applies in sport too - can in many cases reveal something that appears to be complicated as actually being complex. It's almost a paradox, because as we advance our knowledge and our technology to measure the world around us, and the human body, things that once appeared complicated reveal themselves to be complex. And I've always found it a little bit disheartening at times that we think we're getting closer to an answer, but in some respects we're actually inadvertently getting further away from it. 

[00:39:55] Sergio Fonseca: That's right. Yeah, makes sense.

[00:39:57] Sam Robertson: So, most importantly of all - you described complex systems quite eloquently there, far better than I could - but I wonder, what do you think specifically about that? Given that these problems are quite difficult for people to operationalise - and we can talk a little bit later about how we actually operationalise a complex systems approach to injury - in a nutshell, how do you see this approach being part of the answer to the injury problem, if you do at all, for that matter? Why is this going to work better than a traditional approach? What's it going to provide us above and beyond what we've seen in the past? 

[00:40:31] Sergio Fonseca: Okay, going back to what I said before: if we decide to use a complex systems approach to solve a problem, we have to be sure that we're dealing with a complex system. There is nothing wrong with using linear methods and tools, or tools that deal with many, many elements, in order to understand a problem, because that happens as well. So the problem is not the method itself - it's applying the correct method according to the problem I'm trying to solve. 

[00:41:02] So the first question we have to ask is: is sports injury a complex problem or not? So we can go back. In terms of sports - a human practicing sports, a movement system - is this complex, let's say an emergent property, that requires different tools to be understood? Or is movement just like a (unintelligible) measured before they can put the pieces together and create it?

[00:41:34] So in my view, I cannot think about movement or, by any means, sports injury, without considering it a product of complex interactions among many, many elements at several levels. For example, we have the neural system, the musculoskeletal system, and the cardiorespiratory system - each of these systems by itself is complicated or complex enough, and they interact. So I have interactions among complex systems producing movement. So I cannot think that movement, or injuries affecting a movement system, may not be complex. 

[00:42:23] So the first step is: okay, I define a sports injury as an emergent property of a complex system. Once I define it, I have to look for solutions that are compatible with this kind of system. I cannot go back - even while assuming I'm dealing with a complex system - and use the tools I'm, let's say, accustomed to, comfortable with, because that was what I was exposed to before. I have to choose the proper tools to deal with a complex system problem. So basically, if you think you're dealing with a system comprised of many, many elements that interact in a complex manner, you're dealing with a complex system. And then you have to use the proper tools to understand, investigate, or even predict or prevent injuries. 

[00:43:26] Sam Robertson: As you were speaking there, my mind was turning to the way in which we set things up, not only in medicine but in sport as well, which may not be very compatible with that approach most of the time, I think. But that maybe is a conversation for another day. 

[00:43:40] What I really want to pick up on was a point you made on emergence, which is obviously a property of complex systems - universally accepted as one. I've read in some of your work that you use a term called 'imminent transition', and I think that's a really interesting and really nice term that works well with respect to sports injury. It's something that resonates really well with me, at least. 

[00:44:05] When you talk about an imminent transition, is that simply the process of emergence occurring in complex systems? The example you've given in some other work is the abrupt phase transition we see in brain activity right before the onset of, say, an epileptic seizure. To me that feels like something that has legs - something we could pick up and use in the injury context. To me, that makes a lot of sense. 

[00:44:28] Sergio Fonseca: So complex systems are dynamic systems that change with time. Change is something we expect. Some of those changes are quite abrupt and difficult to anticipate - for example, sports injuries. However, in complex systems, when we follow their behavior over time, what we see is that before any transition occurs, these systems present some kind of behavioral change. Take social systems, for example - revolutions in countries. It's a complex system: you have many, many people, you have communication, you have exchange of information, you have different levels of context happening there. 

[00:45:11] Before a revolution, you have a very unstable system - a revolution is not something stable. Before this instability happens, you observe that things are fluctuating; you start to have more perturbations in the system. The same thing happens in the example you gave me: what happens before a seizure? You have brain activity that's stable, and then it starts to oscillate, to fluctuate. That fluctuation is pointing out that something is not right - I have an unstable system now.

[00:45:49] Let's say the system doesn't like to be unstable - countries don't like to be unstable, markets don't like to be unstable, and we, during our movement, don't like to be unstable. So when this instability happens, by means of those fluctuations, something will have to happen for the system to restore stability, and that's a change.

[00:46:12] So before any change - any major change, any abrupt change - I have a signal, an early signal of a phase transition: the presence of fluctuations. By observing the system over time, I can pick up the appearance of those fluctuations, and that's going to give me the signature that something wrong is happening. So I have the time, let's say, to anticipate a change. The system is telling us its story. It's telling us: okay, it's not perfect here, I need to change. So I have time to observe and act. 

[00:46:56] Sam Robertson: So practically, moving forward into a world where we're seeing the necessary advances in all the things that need to advance in order for us to move toward injury prediction - obviously, what you're saying is that it's quite impossible to monitor everything in an athlete, or to be aware of it all, even if they were very self-aware themselves. So another thing you've talked about in some of your work, as a hallmark of characterizing complex systems, is identifying high order variables that capture the system's state.

[00:47:25] An example you've used, which is used relatively commonly in sport, is heart rate variability. It does strike me that those high order variables are going to differ a little bit depending on the type of injury. We had Ken talk earlier on concussion, where, again, the change in the system is very ecological. A concussion in a rugby match comes from a lot of what's happening outside the body, as well as a loss of technique, but also often from what someone else is doing.

[00:47:54] Whereas something that heart rate variability is particularly relevant to is going to be a little bit different. So maybe heart rate variability is one of them, but are there other common high order variables across injuries, or is this really quite specific to the nature and type of injury, do you think?

[00:48:10] Sergio Fonseca: I think that finding the proper high order variable, or the order parameter, depending on the kind of language you're using, is the major task for the future. So for example, if you're worried about autonomic cardiac function, you can use heart rate variability to index it, as a signature of the cardiovascular system. 
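(As an aside for technically minded listeners: the kind of heart rate variability index Sergio mentions is often computed as a simple time-domain statistic such as RMSSD, the root mean square of successive differences between heartbeat intervals. The sketch below is purely illustrative - the function and the RR interval values are invented for this example, not taken from any guest's work.)

```python
def rmssd(rr_intervals):
    """Root mean square of successive differences between RR intervals
    (in milliseconds) - a common time-domain index of heart rate
    variability; lower values suggest reduced variability."""
    diffs = [rr_intervals[i + 1] - rr_intervals[i]
             for i in range(len(rr_intervals) - 1)]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Made-up RR intervals (ms) for illustration only.
rr = [812, 790, 805, 821, 798, 810]
print(round(rmssd(rr), 1))  # 18.1
```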

[00:48:33] But going back to what I mentioned before, I believe that injury in most sports has to do with movement. The movement I observe carries information about the neural system, the cardiovascular system, the musculoskeletal system, even the social constraints that may be placed upon the subject. So movement is something that may carry information about all those interactions. So it's my opinion, and only my opinion, okay, that we should seek this higher order variable in the movement that we observe - movement as a means of assessing several systems without knowing the parts.

[00:49:20] So for most sports, that would be my guess - even for, say, spinal injuries, concussion, things like that. Maybe the movement is there as well, because I have to deal with perception - perception of movement, my movement, someone else's movement - because if I'm not aware, I cannot properly behave in a way that avoids those injuries. So even then, it can be traced back to movement. In terms of your question, I would say that if we're looking for something that can be applied to most sports, we should be looking at the movement. 

[00:50:00] Sam Robertson: I agree with you that that's a really important way to move forward, but there's the measurement of the movement and the identification of what... I'm going to use the word 'abnormal' - I'm not sure that's the right word - but abnormal movement. Because again, what comes to mind is this notion of functional variability, which we know is a feature of highly skilled performers. In some movements, they are able to retain performance and use variability to their advantage rather than disadvantage. So it's almost finding what's too much and what's not enough, which is a really interesting question in motor learning, obviously. It's a difficult problem, there's no question, but I do think it's part of the solution. 

[00:50:35] Sergio Fonseca: But if we look at movement variability - if you have, for example, a very regular system or a very irregular system, both extremes are not ideal. So the key thing here, and people may mistake this, is that I should not look for a global pattern, something that everyone has to fit in. We are used to looking for means, standard deviations, normal, abnormal. But in fact, what I should be looking for is my signature, your signature, my athlete's signature.

[00:51:11] So what characterises (unintelligible) the system I'm dealing with, and how does the system qualitatively change and start to present fluctuations? So we define the behaviour over time. For example, if I observe the interval of each stride during running, or if I look at the regularity or entropy of movement during a series of squats, I can start to say: okay, this is a feature of that system. This is characterising the typical behaviour of that system. 

[00:51:48] So I can trace it, I can follow it, and then I start to realise: okay, it's changing. It may change in a regular way, or it can become unstable, so it fluctuates. And if it starts to fluctuate too much, I know something's wrong with the system. So I'm not going to look for means or standard deviations or something that can classify you as good or bad. No. What I'm going to say is: you're changing, something's happening here. Let's go back and examine you to understand what's going on. 
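(To make the "watch for rising fluctuations against the athlete's own baseline, not a population norm" idea concrete, here is a minimal, purely hypothetical sketch. It establishes an athlete's baseline variability from their early stride intervals, then flags any later window whose variability exceeds a multiple of that baseline. The function name, window sizes and threshold are invented for illustration, not drawn from Sergio's research.)

```python
import statistics

def flag_fluctuations(series, baseline_n=20, window=10, threshold=2.0):
    """Flag window start indices where the standard deviation exceeds
    `threshold` times the athlete's own baseline variability - a crude
    early-warning signal of rising fluctuations, not a diagnosis."""
    baseline_sd = statistics.stdev(series[:baseline_n])
    flags = []
    for start in range(baseline_n, len(series) - window + 1):
        if statistics.stdev(series[start:start + window]) > threshold * baseline_sd:
            flags.append(start)
    return flags

# Stable stride intervals (seconds), then a visibly fluctuating stretch.
strides = [0.70, 0.71] * 10 + [0.66, 0.76] * 5
print(flag_fluctuations(strides))  # [20] - the fluctuating window is flagged
```

The comparison is deliberately relative: the same absolute variability that flags one athlete might be perfectly typical for another, which is exactly the "signature, not norm" point made above.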

[00:52:24] Sam Robertson: Complex systems, the science around them is not particularly new, but I hope you would agree that we're seeing somewhat of a resurgence in it, in sport at least, which is the main area that I'm reading and working in. 

[00:52:36] Listening to you speak, I wonder whether the advances in technology, which is something we talk about a lot on this show, are purely and simply responsible for that. The things you just described - undertaking a longitudinal assessment of an athlete's complex movement pattern with your eye and recording it reliably - I would not be an advocate for that. I don't think humans can do that particularly well, certainly not to the level of individual degrees. But of course now we have markerless motion capture - and again, the technology is still getting there compared to other forms - but I think this is an example of something moving forward, not single-handedly, but largely due to advances in technology, and I think that's exciting. 

[00:53:21] Sergio Fonseca: Oh yeah, and actually a complex systems approach wouldn't have been possible 10 years ago. Now we're reaching a new era where we're starting to have the tools that will allow us to monitor behavior in a proper way. 

[00:53:36] So, for example, inertial sensors - small things that can be placed on your body and tell me what's going on there. And, as you mentioned, markerless tracking systems. Perfect. We have even simpler things - for example, the interval between strides, which I can use a watch to track. So maybe the things will come that allow us to trace behavior and start to capture its changes before something bad really happens. It's a new era: we have to marry the two - our new way of thinking with the tools we're receiving from technology. The chance is there, so it's a matter of being brave enough to apply it. 

[00:54:23] Sam Robertson: We've had a somewhat more optimistic tone in this conversation than I thought we might at the start; I'm pleased with that. The last question I wanted to spend some time on today, like we often do, is to look into the future. We touched on it a little bit there, but what else is going to be needed, or what else would we like to see, to move this area forward? I think technology will continue to be one of the main drivers.

[00:54:49] I also think we need to work on adoption. It's inherent in a complex system that it's going to be difficult for some people to understand, interpret, and then act on or operationalise. So that's relevant to the gap between research and practice, I suppose, which is true for lots of disciplines. What else do you think is needed to move this work forward, apart from those types of things? 

[00:55:13] Sergio Fonseca: Things that may make it difficult for people to adopt a complex systems approach may be related to the fact that people don't want to change the way they think - that's the first thing. I'm comfortable with my statistics, I'm comfortable with my causal relationships, so why bother taking all this out and assuming I'm dealing with something whose pieces I cannot know? We're all immersed in a very reductionist view. We all believe that if I know the pieces, I can put the whole together, and culturally this is the first level of scientific endeavor for everyone. So to assume I'm dealing with a system where I cannot know the parts or elements that are important to understanding its nature - that's quite difficult. So that's the first thing. 

[00:56:10] The second thing is that practitioners don't want to think about that. They want specific guidelines they can implement and make work. They don't care whether it's complex or simple or anything - they just want to know: can I save my athlete from injury and have them playing most of the time? That's all that matters - whether my team may or may not win because this guy was out. What they need is a tested implementation of a complex systems approach that gives them the confidence that, okay, I can use it because it's working.

[00:56:48] So before it goes to the practitioner, it's our problem to establish a viable approach that can actually show it's effective - not in preventing injury in the regular sense, but in showing that my athlete is going through a phase transition, going through something that may not be normal, and giving me the chance to mitigate possible problems.

[00:57:18] So that changes things a little bit, because it's not based on known factors, but on the fact that my athlete - my specific athlete - is changing. So what I should do as a practitioner, when a complex systems approach indicates that an athlete is going through a rough phase, is take them out and use an assessment in accordance with my field of expertise, so I can point out what could be wrong. I may have psychologists, physical therapists, trainers who go and see: okay, what's going on there? Is there something wrong in the musculoskeletal system? Are you having problems at home? Not sleeping well? Oh, your nutrition is really poor. So I can pick up that something is wrong, but I cannot say what is wrong without further assessment, which is going to be specific to the practitioner and maybe multidimensional. 

[00:58:25] There are steps to be done: to show the practitioner that we can actually detect that something's wrong with the athlete, and then they have to use their skills to identify, within their frame of reference, what that might be. It will be different for every athlete - it's not going to be the same for all, but specific to the athlete they'll have the chance to examine in detail and try to understand what's going wrong. So there are a lot of good things to come for the practitioner, because they will be valued in terms of their (unintelligible) skills to understand possible changes or problems that these athletes may have. 

[00:59:11] Sam Robertson: I think that's an optimistic tone to finish on, and it sounds like some of the things you outlined are happening now - we are seeing them in the field, we are seeing those conversations happen. It's just adding that extra step, isn't it, to what comes next? And this might be part of the answer there. It also sounds like it's more than just the science and more than just the practice at the coalface - it's a very multidisciplinary, and maybe even interdisciplinary, problem. 

[00:59:37] On that note, I'd like to thank you, Sergio, for joining us on the show and sharing some of your insightful perspectives on injury. 

[00:59:45] Sergio Fonseca: Oh, I thank you for the opportunity to bring about these views, and I hope, with your help, I can actually bring people over to the complexity side and have more people investigating this problem with a new lens. And maybe we can actually work together and see what happens in the future. Thank you very much.

Final Thoughts

[01:00:11] Sam Robertson: And now some final thoughts for me on today's question. Can we predict injury in sport? For me, the answer is no, at least not yet. As we've just heard, there are a whole multitude of reasons as to why. In order for this to change, large-scale improvements are required in not just one, but multiple areas, including technology, analysis, and human decision-making. That will take concerted, collaborative, large-scale initiatives. And most importantly of all for current practice, it's going to take time. 

[01:00:43] So this raises a follow-up question for those working directly with athletes now. If injury prediction isn't an option, then what do we do in the meantime? The problem is simply too large and important to be ignored. While some will get the chance to focus on working towards this ultimate goal, for most clinicians and practitioners working at the coalface, it simply isn't their job to lead large-scale scientific investigations or to develop the next big thing in technology. 

[01:01:12] So as end users, they're stuck with what they have now - best practice, which is unfortunately also sub-optimal practice. Whether it's interpreting risk factors from the literature or simply developing more reliable intuition, it's sadly clear that none of these activities is guaranteed to improve injury rates beyond what we see now.

[01:01:35] But there is another thing practitioners can do, and that's prepare. Collect the highest quality data, particularly data that captures higher order variables, and keep organised, longitudinal records. So if and when the day does come that a viable prediction method is possible, we're ready to take advantage of it.

[01:01:56] I'm Sam Robertson, and this has been One Track Mind. Join us next episode, where we'll be asking: Athlete tracking, how much is too much? 

Outro

[01:02:07] Lara Chan-Baker: One Track Mind is brought to you by Track and Victoria University. Our host is Professor Sam Robertson and our producer is Lara Chan-Baker - that's me! 

[01:02:17] If you care about these issues as much as we do, please support us by subscribing, leaving a review on iTunes, and recommending the show to a friend. It only takes a minute, but it makes all the difference. 

[01:02:28] If you want more where this came from, follow us on Twitter at @trackvu, on Instagram at @track.vu or just head to trackvu.com. While you're there, why not sign up for our newsletter? It's a regular dose of sports science insights from our leading team of researchers, with links to further reading on each episode topic.

[01:02:48] Thank you so much for listening to One Track Mind. We will see you soon. 
