Episode 1: AI in Sport - Help or Hindrance?

Dr Ellen Broad and Dr Xavi Schelling help us figure out the benefits, dangers and future of artificial intelligence in sport.

Listen and subscribe on:
Spotify | Apple Podcasts | Stitcher | Overcast | Pocket Casts | RSS

What, if any, are the limits for artificial intelligence in sport? Will AI help take sport to new levels of exciting never-before-seen gameplay? Or will it cause sport to no longer be considered ‘sport’ at all?

To break things down, host Professor Sam Robertson is joined by two brilliant guests. First up is Dr Ellen Broad, renowned artificial intelligence expert and author of much-awarded book 'Made By Humans: The AI Condition'. Next, Sam chats to Dr Xavi Schelling, Director of Sports Science and Performance for the NBA's San Antonio Spurs. Together, Sam, Ellen and Xavi discuss the benefits and dangers of AI, how it affects the sports industry and people working within it, how it's currently being utilised to track athletes, and how we might make better use of it in the future. 

Full Episode Transcript

Intro 

Sam Robertson (Host): [00:00:00] Artificial intelligence, not that long ago, the term conjured up images of sentient robots and self-driving cars. And while some of these visions of an AI future have yet to materialise, an AI-saturated world is now very much the reality. But what exactly is AI? 

[00:00:16] Practically speaking, there is no universally accepted definition. But what is often being talked about are recommender systems and machine learning algorithms. With near automation, these tools make predictions, recommend courses of action, or simply make processes more efficient. The progression in AI has opened doors we never thought possible in every single industry.

[00:00:38] But does AI belong in sport? From customisation of the fan experience to predicting the next draft talent, advocates point to its ability to harness troves of data and the potential for faster, more objective and more accurate decision-making. Meanwhile, detractors emphasise the dehumanisation of sport and the looming threat of job losses as AI replaces human labor.

[00:01:00] Those in the know perhaps take bigger issue with the potential for systemic, and often prejudicial, bias to be built into AI's very core. So what, if any, are the limits for AI in sport? Will it help to achieve new levels of exciting, never-before-seen gameplay? Or will too much cause sport to no longer be considered "sport" at all?

[00:01:20] I'm Sam Robertson. And this is One Track Mind.

(Music Interlude) 

Interview One - Dr Ellen Broad 

Sam Robertson (Host): [00:01:24] Hello and welcome to One Track Mind, a podcast about the real issues, forces and innovations shaping the future of sport. Today I wish you all a particularly special welcome, as it marks our very first episode of the podcast. I'm your host, Sam Robertson. As a professor of sports analytics at Victoria University in Melbourne, Australia, and as one of the directors of Track, a large portion of my life revolves around sport. I consider myself very fortunate to be able to spend a lot of time reading, writing and thinking about some of the most pressing issues currently facing the industry. Each episode, I'll be exploring some of sport's biggest questions alongside two expert guests, as we try to answer the question: what is the future of sport?

[00:02:12] On this episode, we're asking: artificial intelligence in sport - help or hindrance? My first guest is Dr Ellen Broad. Ellen is a senior fellow at the 3A Institute at the Australian National University, which focuses on everything from advanced robotics and autonomous cars to smart grids and machine learning. Ellen has spent more than a decade working in the technology sector across Australia, the UK and Europe, in leadership roles spanning policy, standards and engineering for organisations such as CSIRO's Data61 and the Open Data Institute in the UK, and as an advisor to UK cabinet minister Elizabeth Truss. She's a frequent keynote speaker and writer on AI governance issues, and is the author of the much-awarded 2018 book 'Made By Humans: The AI Condition'. Ellen is also the co-designer of a board game about open data that is now being played in 19 countries. I'm really interested to hear how someone very much at the cutting edge of AI sees its future as it relates to sport.

[00:03:13] Ellen, thank you so much for joining us. 

Dr Ellen Broad: [00:03:14] Thank you so much for having me. 

Sam Robertson (Host): [00:03:17] So before we get into any topic in great detail, I think it's important that we talk about AI itself. I've lost track of the number of different ways I've heard it described, and you probably have too, but I've also heard some reasonably strong arguments to suggest that the term is in fact problematic.

[00:03:33] I noticed the way you describe AI in your recent book 'Made By Humans' is quite similar to what I outlined in the introduction. Is this still how you would define it today? 

Dr Ellen Broad: [00:03:42] Yeah, pretty much. It's been two years since the book was published, and I think my advice to anyone speaking about artificial intelligence remains the same: ask for a more specific term to describe what they're doing, because there typically is one.

Sam Robertson (Host): [00:03:58] I mean, do you have a view on what those more specific buckets would be? For example, is it prediction we're talking about, is it automation, is it machine learning? Or is it really quite a continuum rather than a classification per se?

Dr Ellen Broad: [00:04:11] I think it's a continuum. I remember somebody telling me a useful way of thinking about artificial intelligence is that it's whatever a computer can't do yet. Because as soon as it is something a computer can do, we give it a name, like machine learning or virtual reality or drones or robotics, and a lot of these terms overlap. And often, when we're talking about artificial intelligence, we're using it to refer to things that involve the replacement or augmentation of human labor.

[00:04:47] So I like your use of the word 'automation' just then, because I think quite often where we get really kind of excited about it and where it preoccupies media debate, policy discussions, the sense of innovation and disruption, is where we're proposing to automate processes using different computational techniques that once they are implemented, get their own name.

[00:05:16] So I like that: we call it AI when it's something that a computer can't quite do yet.

Sam Robertson (Host): [00:05:24] I haven't heard that and I quite like it. I think that's something I might use in the future. Just as you were speaking there, I was thinking about the preoccupation we have with the future application of AI, and it was kind of inherent in your answer that the media does tend to focus on things that maybe aren't really a problem yet.

[00:05:41] Do you think it's still useful to keep our eye on those bigger-picture things, particularly as they affect us ethically and as a society moving forward? Or is it okay just to talk about automation? With the implications for our privacy, for ethics, and for automation that could lead to job losses, is it premature to worry about all of that yet, or is it something we should be talking about now?

Dr Ellen Broad: [00:06:05] So the founder of our institute, Distinguished Professor Genevieve Bell, who's a very famous digital anthropologist, said something to me while I was writing 'Made By Humans' a few years ago: the fears that we have about technologies are a reflection of long-standing human fears. Fears of being irrelevant, fears of being replaced, fears of losing agency. And those have been a part of how we approach the business of work, how we approach leisure, how we approach our relationships with governments, in ways that predate the current suite of technologies that we call AI. And I think perhaps the comment that I'd make is: it is not that we should be more afraid of artificial intelligence. It is that the level of fear we have, I think, is a healthy one, that forces us to have conversations, perhaps not about the role technology plays in our lives, but about our relationship to things like labor. Like what does it mean to do meaningful work? What does it mean to have the ability to challenge decisions made about you?

[00:07:15] So, I think we're starting to move into those conversations and away from Elon Musk's "What about the AI that takes over the world generating strawberry fields forever". I think we've moved past that. And we're now starting to talk about it more in terms of humans. 

Sam Robertson (Host): [00:07:33] That response is interesting, because it picks up on something else I wanted to talk to you about, which is whether this is an education problem and whether we need to raise the literacy, so to speak, of people's understanding of not only AI, but technology in general. I'm reminded of the old Carl Sagan quote about how we've got this society that's completely reliant on technology, but almost no one understands how it works. Now that was, what, 40 years ago, but it's the same with AI. I wonder, though, with your response there, whether it may be less important that people are aware of how it all works and more important that they understand what the implications are for them - in terms of the biases built into certain machine learning systems, for example. A question I'm really fascinated with is bias in algorithms versus bias in humans. Obviously bias is everywhere, in any type of decision-making or judgment, and it seems those bigger questions you just raised are far more important than the public having a better understanding, or higher literacy, of what AI actually is.

Dr Ellen Broad: [00:08:32] I guess you made two points there. I might tackle them in reverse order, because I'm really keen to hear what it's been like for you working in sport, and how you approach the literacy versus comprehension-of-implications issue. You've worked with cutting edge technologies in sport where you must face into these every day, so I'm really keen to hear how you've managed it. But just quickly, working back from the bias issue: it's completely true that we have bias in the technical systems that we use, in the same way that we as humans have manifested bias in different ways.

[00:09:05] What I get frustrated about is the kind of false equivalence that we place between those two. Like, well, humans are biased, so if our technology is biased it's just replacing another kind of human bias. Because one of the things that is very different about the design of technical systems is that they are designed to scale. So that something that I design in Canberra, Australia could be deployed in every state and territory in Australia and in other geographical jurisdictions. So whatever I've put into that could be applied to more people in more contexts than I could reach as me, Ellen Broad, deciding what athlete is fit to play.

[00:09:50] So that's why it's just really important that we're talking about these issues in technologies with great seriousness, because where a system is not working as intended, where it has error, where it is manifesting certain kinds of bias towards certain societal groups, for example, the effects at scale can be more devastating.

[00:10:11] So I think that's kind of why it is rightly such an area of focus at the moment. To your point about literacy, the thing that I think about is there are so many things in my life that I genuinely have no idea how they work. Like I have no idea how to wire my house. I have no idea how to build a house. I have no idea how to make clothes. I have no idea how to do most of the things that I rely on day to day, which are all forms of technology. We kind of now just talk about technology as being cutting edge computer systems, but technologies extend to sewing and cooking and house construction and vehicle manufacturing. And it's not that I need to know how any of those things work. I don't know how to take a car apart and name every part and put it back together. But I know the sounds it's making when it's not working. And I know the instinctive levels of distrust I get when I take it to new mechanics and then they say it needs all of these parts replaced. So it's like we get a feel for the kinds of implications or knowledge, and also the kinds of professionals that we can talk to in all these settings. You know? And I think the difference at the moment in AI is, one, it's still very young. Like computing in general is less than a hundred years old. So we're still kind of in awe of its potential. We were like that with electricity. I remember...[laugh] I remember 250 years ago when I was born. No, I remember that in early discussions about electricity, you know, we thought it could do everything. Cure cancer, cure blindness, treat women for hysteria, and we've obviously changed a great deal in our response to that technology over time. I think something that makes gauging implications harder with computing technologies is that we have explicitly designed them to make the work involved invisible.

[00:12:10] It is very hard for members of the public to even get a sense of it being constructed by humans because all of the work, and what makes it magic, is that it just seems instantaneous. And, you know, we talk about it like magic spells. We kind of, you know, it's peanut butter and goblins just sticking things together. So I think that makes it really hard for people to do the kind of human smell-sniffing-sound detection that we do with other kinds of technologies. But I'm really interested for you, like how does that actually work out in practice? Cause this is me kind of saying, this is what I think it is, but what's it been like for you, working with it in applied settings?

Sam Robertson (Host): [00:12:52] Yeah. I mean, like every industry, technology adoption and computing systems in sport have increased exponentially. But before I answer that, you mentioned something about the nature of technology, and I always think of the way Don Norman refers to this in terms of surface-represented technology versus internally represented technology. A hammer is a surface-represented piece of technology: you can look at a hammer and know what it does and how to use it just by looking at it. And I'm not the first person to notice this, but certainly in sport, like most industries, as you've rightly pointed out, more and more of the technology we're using is internally represented. You can't find out how it works or how to use it unless you unpack it or take it apart. And in sport, we haven't really dealt with that very well as a whole. We're still grappling with it. We have whole jobs now that are set up to monitor or utilise one piece of technology, when 99% of the workforce don't know how it works, and sometimes even why it's important. So, again, I don't want to be totally self-critical of sport, because I don't think a lot of industries have been well-prepared for that, but certainly there hasn't been enough sitting back and taking stock of where we are, where we're heading, and why we're taking up certain technologies.

[00:14:11] And again, this is a complex, complex world, like lots of industries and lots of fields are, and there are pressures on sporting clubs to keep up with the Joneses more than in most industries. So they're also not exempt from the whims of salesmen and people trying to peddle products either. And so there's a whole range of considerations there. And let's not forget that management, owners and boards in sporting clubs are invariably not trained scientifically, and they're certainly not trained technologists. So it's no wonder we're in a little bit of an interesting position there. But I'm not going to be too negative on my own industry, I guess it's like this everywhere.

Dr Ellen Broad: [00:14:47] I don't even think it's negative. It's a reality. I mean, sport more than any other industry is explicitly about winning. You could challenge me on that, but you're rewarded if you win and you get more jobs if you win. And so you can't really ignore potential technologies that could have an effect, because you might lose your edge. So I don't think it's being hard on your industry. Every industry to a certain extent has that with developing new technologies generally, but in sport in particular, when your future is predicated, in quite an explicit way, on the success of what you're doing now, it's very hard to ignore these things or say "Let's just take a few seasons and think about it."

Sam Robertson (Host): [00:15:35] Yeah, again, it's not unique to sport, but it's certainly a characteristic that other industries don't always have. It's very, very fast paced, with very short memories a lot of the time as well. I think one of the great things about AI is that it's pervasive - it's literally being applied to just about everything we do as humans now - but I'm really interested to hear your thoughts on where it could potentially be used best in sport, and particularly what the implications are for different stakeholders. We just talked about a few: athletes versus coaches versus franchise owners, media, fans - they all have different interests. You just talked about sport being there to win, and certainly athletes and fans and probably coaches think that, but other stakeholders are potentially trying to make money. That might be a little bit cynical, but that's really their main goal. From what you've seen in other disciplines, do you have ideas about what a more AI-friendly sports industry might mean for some of those stakeholders?

Dr Ellen Broad: [00:16:34] So, I'll say two things. One is about other industries, because I can talk about other industries with much more expertise than I can talk about sport, although I can definitely have a crack at sport given my family is sports mad and I can reflect on what I've seen work. One of the things that I think we don't talk enough about is that quite often some of your most game-changing applications of AI - say we're talking about using massive quantities of data to speed up or automate certain aspects of how you might complete a process - are really helping us out in deeply unsexy ways. Like the administrative side of sport. Processing databases of gameplays. In HR, to give you a case study that we've looked at, it's not that AI necessarily gives us great advantages in knowing candidates better - because it's just really hard to know how people are going to perform in a workplace, for a bunch of reasons unrelated to the quality of their interview - but it can help us automate other parts of this otherwise deeply painful process. Like randomly allocating CVs to humans to look at, or creating a more streamlined workflow so it's easier for humans to get to the bits of the applications that are really important to deciding who should be a candidate for a job. And so that's not replacing the human looking at the other human, it's making it faster and more targeted and easier to find the bits that you need.

[00:18:14] So I think - and you're probably thinking of them right now, I can see you smiling because we're on video - there are many elements of processes in sport that are not the magic-spells, sexy bit at the end, where you're like, I can predict exactly when you will have an injury, but where you could automate parts of the process that make injury prevention easier and more targeted for humans.

[00:18:39] So I think unsexy processes. We get a huge kick out of using massive quantities of information and more sophisticated computing techniques that just help us speed those up. And even just things in sport - I imagine you would focus as much as other sectors do on things like the effects of weather and ground conditions and ball bounce, and a range of aspects of the game, depending on what game that is, where these are more measurable using a range of sensors in ways that perhaps don't call on us to make assumptions about what the data is telling us. Like, if you're just trying to understand the quality of ground conditions, or the effects of weather, they're - I hate to use the word neutral - but we can collect lots and lots of information about them, and we can use them as a close proxy for what we're trying to measure. I think it gets harder in sport, as it does everywhere else, where we're trying to use what we can measure with sensors to stand in for something that's actually quite hard to measure in practice.

[00:19:52] So I used job recruitment as the previous example, where we try to measure, using video technology, the extent to which people smile or make eye contact as a measure of, say, their empathy and awareness - and those are not the same thing. So I can imagine you have some contexts in sport as well where some things are quite easy to measure, like strain on the knees connecting with the ground, but if you are trying to measure, say, endurance in a more human sense, maybe you get into stickier territory. I don't know what you've found.
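Ellen's earlier example of randomly allocating CVs to reviewers is a good illustration of how small the "unsexy" automation wins can be. A minimal sketch, using only the Python standard library and entirely hypothetical candidate and reviewer names, of randomly but evenly allocating applications to human reviewers:

```python
# A minimal sketch of the "unsexy" automation described above: randomly and
# evenly allocating applications to human reviewers. All names are hypothetical.
import random
from collections import defaultdict

candidates = [f"candidate_{i:03d}" for i in range(1, 21)]
reviewers = ["reviewer_a", "reviewer_b", "reviewer_c"]

random.seed(42)            # reproducible allocation
random.shuffle(candidates)

allocation = defaultdict(list)
for i, candidate in enumerate(candidates):
    # Round-robin over a shuffled list: random, but evenly balanced.
    allocation[reviewers[i % len(reviewers)]].append(candidate)

for reviewer, assigned in allocation.items():
    print(reviewer, len(assigned), assigned[:3], "...")
```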

Sam Robertson (Host): [00:20:30] Well, it's particularly an issue with some of the human factors that you mentioned. We just don't have - and again, I'm not a psychologist - great ways of measuring, through technology, some of those characteristics which we know are important to sports performance: resilience, communication, all these types of things. And so we really struggle with availability bias. It's very easy to focus on things that are easy to measure with technology, which is largely the physical components of sport. So we're very good at measuring things like how fast someone can run, and that does draw our attention to those qualities more than perhaps it should. There's some understanding of that, I suppose, but we've still got a way to go.

[00:21:07] Oh, I need to come back to what you said earlier. And I know anyone listening who knows me at all will think I put you up to that response about focusing on the unsexy part of data, but it is something that I kind of yell from the rooftops to people in sport when I have the opportunity, because it frustrates me on two levels. Firstly, that we don't necessarily always see the opportunities for automation or semi-automation of processes. And secondly, when we do see the opportunities, we either don't take them, or we don't take advantage of the time saved. Evaluation of a player in a team sport could be done accurately and reliably in a somewhat semi-automated fashion now, and yet coaches still enjoy spending six hours together in a room on a Monday morning discussing who was good and who wasn't, and then coming to some kind of binary decision anyway, which is generally deferred to the most senior person in the room.

[00:21:57] So this is where we're at, and this is not, again, a sports-specific problem. It's just something that's inherent all over the place, I'm sure. 
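To make the semi-automated player evaluation Sam describes a little more concrete, here is a minimal sketch assuming pandas and entirely hypothetical stats, weights and player names; a real rating model would be validated against coach judgement rather than hand-weighted like this:

```python
# A sketch of semi-automated post-match player evaluation: combine a few
# match statistics into a single weighted rating. Stats, weights and player
# names are hypothetical placeholders, not any club's actual metrics.
import pandas as pd

match_stats = pd.DataFrame(
    {
        "player": ["player_1", "player_2", "player_3"],
        "disposals": [28, 19, 33],
        "turnovers": [4, 2, 7],
        "pressure_acts": [15, 22, 9],
    }
)

# Standardise each stat so the weights operate on a common scale.
features = ["disposals", "turnovers", "pressure_acts"]
z = (match_stats[features] - match_stats[features].mean()) / match_stats[features].std()

# Hand-picked illustrative weights; turnovers count against the rating.
weights = {"disposals": 0.5, "turnovers": -0.3, "pressure_acts": 0.4}
match_stats["rating"] = sum(z[f] * w for f, w in weights.items())

# Rank players for the Monday review, leaving the final call to the coaches.
print(match_stats.sort_values("rating", ascending=False)[["player", "rating"]])
```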

Dr Ellen Broad: [00:22:04] I used to work on a lot of big data infrastructure implementation challenges, and the first thing people would ask is, "What should we do?", and I'd be like, this is a human problem, it is not a technology problem. The way in which we make use of technology more effectively is by changing our culture as humans interacting with it. But it is really hard to get people to understand. We still have this idea that you can just drop an expensive solution in the middle of it all and then all of the change you want to see will happen around it, and we forget that humans have traditions and expectations and previous understandings of how a job is done. And unless we really help with that, they're always going to be shiny toys that we don't use very well.

Sam Robertson (Host): [00:22:47] Yeah, and there's a role that's becoming more popular in the banking sector, I think, but it's creeping into sport too, which is that role of storyteller. I don't know if that's the right word, but certainly something almost like a PR advocate for AI in sport is needed. And maybe that's a storyteller, or maybe it's someone who just understands it and can relate well to coaches. And of course there's not a lot of those people out there, in sport anyway.

[00:23:10] Now I know, as is often the case, we're running short on time, but I really wanted to talk to you about limitations. I think it's very easy to get carried away with the advantages, but are there limitations or negative effects that you could see playing out in sport - potentially things you've seen elsewhere - in the future, or even now?

Dr Ellen Broad: [00:23:28] So there's a few things. I think a general limitation we have is that a lot of the time, particularly when we're trying to use AI to chase things that have previously been elusive, we can put too much trust in the systems that we're using and ignore our own human judgment. So that's where you're like, well, the system is telling me this guy runs the fastest and he is the strongest, so he's going to be the best player - irrespective of how we might have known, before using a predictive system, that there's more to it than that. So one limitation is: how do we ensure that we maintain some kind of equilibrium between the systems that we use to help us make decisions and our own experience as well?

[00:24:15] Something that I think about a lot for elite athletes is that, probably more than for people in almost any other job, the amount of data being collected about them is incredible. There is nobody measuring my body weight and body mass and performance on a daily or weekly basis, and how fast I'm moving, and looking at my genetic history in order to understand what makes a fantastic athlete. That is not happening for most people. And that is not only a huge amount of information to collect about individuals, but something that I think about for athletes is: how does that follow you over time? We know, for example, that your injury record is a factor in the kinds of future opportunities you have with different clubs, but as we start to amass richer pictures of athletes, what kinds of decisions could be made about them using information that actually is just not that predictive, but that we start relying on simply because we have so much of it? You could have an injury record going back three quarters of your career. So we're moving from having really detailed data about you from the last three years to, if you're a young athlete now, your whole career being incredibly quantifiable. And how do you allow people to move on from that? To not be beholden to this digital footprint of the past? So for athletes, I think about that a lot.

[00:25:51] I remember you and I were having a conversation about unique identifiers for athletes. And we know, for example, from using unique identifiers for researchers, that that changes the way we measure research; it changes how we value certain research outputs. So it has a positive, in that it makes your history as a researcher easier to track, but it also skews how we measure research and the quality of outputs. So I just think about, you know, what is the effect of lasting, more detailed information about humans? I cannot think of other industries that are perhaps as quantifiable for individuals. Astronauts! Astronauts and athletes.

Sam Robertson (Host): [00:26:33] Yeah, they're probably pretty close. They might even have athletes covered. I think, hearing you speak there, as is so often the case, it comes back to where we're starting from and what our understanding of sports performance is. Do we have a working model - a mental model, or even an analytical model - so that when we're doing things like you just mentioned, we're aware of the gaps? So we're aware that we haven't got great ways of tracking their communication, their resilience, their mood, those things that we talked about earlier. But even doing that, I think it's still very easy to forget about those things, and you do become reliant, and I think that's still a danger.

[00:27:10] Just before I let you go, I wanted to ask you this. It's such a fast-moving area, but what are the one or two big things that AI as a whole is grappling with now, or could be soon? You don't need to talk about how it might impact sport, but certainly our listeners might be able to take that away.

Dr Ellen Broad: [00:27:32] So I think conversations now are moving very explicitly to discussions of power and structure and accountability. My area is kind of AI ethics and responsible AI, and we talk about there being two or three waves that we've been on.

[00:27:47] One being about kind of values-driven, principles-driven approaches to technology design. The second being about improving technical issues with systems. So reducing bias in your model, improving your error rates. And now this third category is kind of, well, how do you think about these systems in context and who has ownership of them and how will they be deployed?

[00:28:10] So I think the next 10 to 25, to probably 50 years, we're going to be talking a lot more about things like organisational accountability for the use of systems. We're going to be talking about professionalisation of the design of those systems. So it's going to move away from how do we make the system as good as it can be, to what are your responsibilities as, say, a team using the systems in high-stakes contexts. So if you're going to make decisions that could affect a player's livelihood, what are your obligations in using the systems? I don't think we're talking about that perhaps explicitly in sport yet, but it's definitely happening in other contexts like law enforcement, facial recognition, this move away from "how do we improve the system?" to "what are your responsibilities as organisations designing and using them?".

Sam Robertson (Host): [00:29:00] Yeah, I mean, there are certainly lots of implications for sport there, I think. And I would agree that we're probably not quite addressing those at a wholesale level yet.

[00:29:09] Dr Ellen Broad, thank you so much for joining me on the show.

Dr Ellen Broad: [00:29:11] Thank you so much for having me. It was fun.

(Music Interlude) 

Interview Two - Dr Xavi Schelling 

Sam Robertson (Host): [00:29:19] Now for a different perspective on AI, from someone who experiences its influence on a daily basis in the professional sporting environment. Dr Xavi Schelling is the Director of Sports Science and Performance for the NBA's San Antonio Spurs, where he has worked since 2014. Prior to that, he worked with Basquet Manresa and the successful Spanish Under 20 national team.

[00:29:42] In addition to his considerable basketball industry experience, Xavi also has a PhD in exercise physiology, along with a couple of master's degrees. He has also published extensively across applied sports science, including more recently on the implementation of decision support systems in the sporting context. Consequently, I'm really interested to hear what he has to say on this particular topic. Xavi, thanks so much for joining me on the show.

Dr Xavi Schelling: [00:30:06] Hi Sam, how are you? Thank you for having me. 

Sam Robertson (Host): [00:30:07] I'm great and thanks once again for joining us. I'm going to get straight into it. Which areas of sport do you see as most likely to benefit from AI into the future? And are there some that are more likely to be problematic? 

Dr Xavi Schelling: [00:30:20] Yeah. I mean, I think that AI can be implemented in literally any field. AI is in our phones, AI is in our watches, AI is in cars, in our fridges, it's everywhere. Why should it be different in sports? Obviously, the other hot topic is big data, and the more data you have, the more applicable AI is and the more sense it makes to have it behind your processes. But it also applies to simpler processes that can just be automated to improve their efficiency. And that can go from scouting reports, to talent identification, to the performance staff trying to assess or evaluate performance on court or off court. It's literally endless. It will depend on the data that you have available and how mature the information systems are in your organisation, basically.

Sam Robertson (Host): [00:31:16] And you mentioned a couple of areas of sporting organisations there that have probably been at the forefront of picking up AI. In your experience across different sporting codes, do you see any that are lagging behind? And do you have any take on why they might be lagging in their uptake?

Dr Xavi Schelling: [00:31:34] I think that scouting is catching up. I'm saying catching up, and it will depend on the sport, but in basketball and in the NBA in particular, which is my field right now, our scouts are very, very good and our scouting team is very, very big. And we've relied for many, many years - for 20 plus, 30 years - on their experience and their subjective analysis of talent identification. Now we are in the perfect marriage between basketball analytics and advanced statistics and AI, and the human expertise. So I think that scouting was, a few years ago, clearly behind, but they are very quickly catching up because the future of the organisations is on that talent, young talent identification. And very closely, and probably we'll talk about it later on, is the medical side.

[00:32:27] On the medical side, I would say they are trying to catch up, but there's a lot of skepticism, and for good reason. The decisions are very, very hard to make - they involve health and injury prevention - so the implementation of AI, machine learning and information systems is very cautious and very conservative.

Sam Robertson (Host): [00:32:49] You're right, I probably did want to explore that a little later on, but perhaps we can do it now. You mentioned skepticism by the end user, but I think it's an image problem also, perhaps because it's quite a new area still for many disciplines. And I feel like that image problem has a lot to do with the human-versus-machine quandary. In medicine and other very black-and-white decision-making, I think if the AI system recommends a course of action that turns out to be the wrong one, it might be judged more harshly than if the same error was made by a human. Do you think that's part of the problem with medicine? Injury prediction modelling is something else that comes to mind in that area. Does that resonate with what you've experienced?

Dr Xavi Schelling: [00:33:33] Yeah, absolutely. I think that for injury prediction we could do a whole podcast and a whole interview, because as you know it's a very hot topic in our field, and we can expand on it later. But yeah, I agree. I think that the reason in medicine is exactly that, because the skepticism relates to the impact of the decision. You can implement a very complex and not very successful AI for a decision that is not very important for the organisation, because if you fail in that decision, it won't have a huge impact. Now, when you are deciding on health, on an injury, or on selecting a high-caliber player where you are putting in millions of dollars, failing on that decision is very, very hard. And there is a risk aversion, which is very natural in humans, where we would rather take that ownership ourselves instead of relying on a process we don't completely understand, even though maybe that process is better than our judgment. But it's human nature, I guess. We need a lot of education.

Sam Robertson (Host): [00:34:36] Yeah, and I guess it's also much easier to stop using AI than it is to move on a staff member, which is much more confronting, or even to discipline them. So it's quite multifaceted, probably. Following on from that a little, we've talked about understanding and education, but something that's remarked upon a lot in AI, particularly by people working directly with it, is the ability of some of these systems - and particularly some of the newer algorithms - to be very powerful, but also to be black boxes. And I've certainly found in my experience that in order to achieve buy-in from a coach, or in fact almost any other stakeholder, they often want to be able to understand and interpret components of a model or algorithm. I think that's human nature as well, but sometimes it's also because they want to compare the way they arrive at a decision with the way the algorithm would. But of course, this is a quandary, because as we get more and more data, as you mentioned in your introduction, we're going to have hundreds and thousands of inputs going into these algorithms. So we might not have much of a choice.

[00:35:41] So moving forward towards the future, do you think we are always going to need to know how AI arrives at an answer? Are there situations in sport where it might be okay that we don't know? And I guess vice versa, where could it be an absolute disaster? 

Dr Xavi Schelling: [00:35:54] Yeah, I think the question is more generic, meaning any implementation process requires time. And I would say that when you are implementing AI in an organisation, for people who are not versed in AI at all, you have to first build models and decision support systems that are easier for the end user to understand, in order for them to embrace and accept them. That's the first step of implementation. Now, when that is implemented and the end user finds the DSS - the decision support system that you've developed - useful, because their decisions are better, or the decision-making process is faster and more efficient, or both, then you can sacrifice a little more of the model's interpretability and go for more accuracy. But in order to do that, you need a level of maturity in your organisation first. And again, education. Implement slowly, with a lot of explanation of the reasoning behind the model, and then you can sacrifice a little more of that interpretability, with acceptance from the end user, and go for a more accurate, more black-box model, if that makes sense.

Sam Robertson (Host): [00:37:06] Yeah, absolutely. So to you it's a process of education, but also of gradually rolling out complexity, which of course relates to a term in computer science called 'regularisation', where we penalise an excessively complex model. And I think you've just provided a good example of how that's useful practically.
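To ground both Xavi's interpretable-first rollout and the regularisation idea Sam mentions, here is a minimal sketch assuming scikit-learn and synthetic data: a regularised logistic regression whose coefficients an end user can inspect, followed by a gradient-boosted model that typically trades interpretability for accuracy:

```python
# A sketch of the trade-off discussed above, on synthetic data: start with an
# interpretable, regularised model, then compare it to a more "black box"
# learner once the end user trusts the workflow. Data and features are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: interpretable and regularised. C is the inverse penalty strength,
# so a smaller C penalises large coefficients (an excessively complex fit).
simple = LogisticRegression(C=0.5, max_iter=1000).fit(X_train, y_train)
print("logistic regression accuracy:", simple.score(X_test, y_test))
print("coefficients the end user can inspect:", simple.coef_.round(2))

# Step 2: once the decision support system is accepted, trial a more accurate
# but less interpretable model and compare it on the same held-out data.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("gradient boosting accuracy:", black_box.score(X_test, y_test))
```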

[00:37:23] I want to come back to something you mentioned earlier, which was efficiency, as a word you used when talking about AI. I think a lot of people, when they think of AI - particularly those who aren't working directly in the area - talk a lot about prediction and automation. And obviously automation is quite closely linked to efficiency, but I think it's almost something people think of second, relative to prediction, because that's the glossy output, I guess. But on the latter, in terms of improved insights - and you've obviously worked in this area for quite a while now - have you seen anything in sports science or sports performance where AI has really started to challenge well-held beliefs or models or theories of sport that are now being proved redundant, with so much new data and AI coming in?

Dr Xavi Schelling: [00:38:10] Yeah, the first topic that comes to mind is injury prediction. It's something that a lot of vendors and consultants are using to sell their products: 'we have beautiful machine learning behind our systems that predicts injuries'. And it's something that lots of organisations have tried internally, with very good experts and data scientists trying to model it. Very important vendors have tried it too. But at least in my personal opinion, and talking with several colleagues, we haven't found that tool that we can really rely on to actually predict an injury.

[00:38:49] And the reasons are many. One of them is: do we really have the data, or can we really measure the KPIs that will give us, or the model, something with which to accurately predict the injury? I don't know - maybe eventually we will with technology, but I don't think that right now we are measuring what really matters. That's one. And two is: are we measuring those KPIs, if we really have them, often enough? Or are we inferring injury risk from game data, when we're playing every seven days, and disregarding the six days in between? Everyone is talking about how important sleep or hydration is, but are we capturing sleep for every single player every day and feeding that into the model? I think that we are making a lot of assumptions and making pretty blunt claims that are dangerous for the industry - and not just for the industry, but also for AI. If it's being used wrongly now, it's going to be harder for us to implement AI later on.

[00:39:58] It's definitely useful for contextualising the player's status, which may relate to injury risk, but we have to be very smart in how we do that. We need to know our limitations, and we need to keep growing and implementing the right technology and building smart data systems. And with this I also mean data workflows: not just acquiring a new device that has been proved good for assessing hamstring strength, and another one that has been proved good for assessing sprint or speed on court, but then having them in isolation so they don't talk to each other. And then we have a sleep monitor that doesn't talk to the hamstring strength data.

[00:40:44] I think having the right KPIs, the right data workflow and then the right model is the first step. And once you have that, it's about making smart decisions with the information that model is giving you, and being very careful with it.
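The data workflow Xavi describes - devices that "don't talk to each other" - is at heart a joining problem. A minimal sketch, with made-up device exports and column names, of merging separate sources into a single athlete-day table that a model could then consume:

```python
# A sketch of the data workflow described above: merging separate device
# exports (strength testing, sleep monitoring, on-court tracking) into one
# athlete-day table. All values and column names are made up.
import pandas as pd

strength = pd.DataFrame(
    {"player": ["p1", "p1", "p2"],
     "date": ["2021-03-01", "2021-03-02", "2021-03-01"],
     "hamstring_force_n": [410, 395, 460]}
)
sleep = pd.DataFrame(
    {"player": ["p1", "p2"],
     "date": ["2021-03-01", "2021-03-01"],
     "sleep_hours": [7.5, 6.2]}
)
tracking = pd.DataFrame(
    {"player": ["p1", "p2"],
     "date": ["2021-03-01", "2021-03-01"],
     "distance_m": [5200, 4875]}
)

# Outer joins keep athlete-days where a source is missing, which makes the
# gaps (e.g. no sleep record) explicit instead of silently dropping them.
daily = (
    strength.merge(sleep, on=["player", "date"], how="outer")
            .merge(tracking, on=["player", "date"], how="outer")
)
print(daily)
```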

Sam Robertson (Host): [00:41:00] Yeah. My mind's going to a lot of different places hearing you speak there, but I'm thinking a little bit about the importance of organisations working very closely with industry, in particular in the technology and startup area. And I'm also thinking about the skill sets of people working in organisations. You mentioned infrastructure and communication, and these are things that can often get glossed over, I guess, in favour of someone being brought into an organisation to build a fancy model.

[00:41:29] I'll talk about the roles and skill sets of people working in sport in a moment, but just on the first point: it does seem from your response - and I would totally agree with this - that the algorithms, or the AI itself, are maybe at a stage where they're quite sufficient for our use in sport, but it's the inputs coming into these models that need the greatest amount of work. Are sports and universities and all of these other stakeholders working well enough with the startup industry to make sure they're focusing their time and energy on the right areas? If I think about how we evaluate a player in any team sport, for example, I think we do a reasonably good job on the technical elements, and probably the physical as well, but we do such a poor job of measuring the human factors of a player's performance: their communication, their mood, their ability to help a team self-organise even, the way they are cohesive with a unit on the floor or the pitch or the field. Is that something you'd agree with?

Dr Xavi Schelling: [00:42:30] I agree 100%. Actually, not long ago we had an internal meeting, an informal discussion about KPIs - specifically on defence, but it could apply to offence as well - and something we all agreed on, and I've had this conversation with colleagues in the past, is that we found ourselves talking all the time about how important the intent of the player was to do this or that. Okay, how are you measuring intent? Intent is absolutely impossible to measure. And if intent is something that is very relevant to actually assessing how well that player is performing, well, we're in trouble, because we are missing a huge KPI.

[00:43:12] Besides intent, which is a very, very good example of an unmeasurable thing - because we will never know if the player really wanted to do that or was just speculating - the other interesting thing is when the player is not doing something. Measuring that reliably is really, really hard. There are players that have a huge influence on defence because they are not doing something, and that not doing something, from a team behaviour perspective, maybe you can infer it, but it's another really hard, immeasurable thing. So I would agree with your statement 100%, and two days ago we were talking about it - I mean, what defines what?

Sam Robertson (Host): [00:43:55] Yeah, and I think the more complex a team sport, the harder it is to separate that intent. It does make you wonder about the football codes and basketball codes - that is a whole other challenge, to get to that level.

[00:44:06] I'll come back to my second point now, while I remember, on the staff and the structure of the workforce. And you might talk about this in terms of basketball and your own experience, but also more generally, what does this mean for how we're structuring our sporting organisations? Obviously we're seeing people with data science skillsets and computer science skillsets now cropping up in most sporting organisations, but have we got the balance right? Are they the skillsets that we need and how's it going to look in the future, do you think? 

Dr Xavi Schelling: [00:44:34] Yeah, I think that there are three big blocks that we need to cover for sure. It doesn't mean that you need three people. Maybe you have hybrid roles that are covering more than one, but there are three roles that I think that we need to cover for sure.

[00:44:49] One is IT - information technology. You need someone who is in charge of the hardware and making sure that we have the right technology in place to build our information systems. The other one is data engineering. Sometimes that's a hybrid between IT and data engineer, or a data scientist who is also a data engineer, but someone needs to make sure that we are connecting the different databases, the different tables, the different departments, and not that each department is living in a silo. Having someone who has the big picture, not just from the decision-making perspective but also from the technical side - what's the best data structure to have behind your models and behind your AI - is absolutely critical.

[00:45:35] And then, when those two fundamental pieces are in place, you have to have a data scientist. Data scientist slash the new sport scientist, which is a hybrid between a data scientist and a specialist in a specific field of sport performance - maybe psychology, maybe strength and conditioning, maybe physiology, whatever it is - but someone clearly focused, whose toolkit has a lot of strength in data management and data analysis. Those three blocks - and I think not just in future organisations, but today - are where you need to be sound: data workflow and data engineering, IT, and then data science.

Sam Robertson (Host): [00:46:18] And again, if you can find someone with all those skillsets, they may well be a unicorn, but I guess that makes them even more valuable.

[00:46:26] So just one more question before I let you go, relating again to the future of AI, I suppose. We kind of talked about this in our opening - which big areas of sport will get the most benefit from AI. As we've spoken, we've talked about AI being more than just algorithms and systems, and much more about the data inputs as well. As I think of it now - and I'm not a geneticist - I feel like the genetics field is probably an area that's going to benefit from AI as we uncover more insights there in the future. And the other area that comes to mind is unstructured data from vision, particularly with markerless 3D motion capture now starting to become a reality, I suppose. Can you think of other areas, or would you agree that those are probably two of the ones most ripe to benefit from AI over the next 10 or 20 years?

Dr Xavi Schelling: [00:47:14] I think that, yeah, you are right on the money. Computer vision, with a lot of different branches underneath it, including imaging - a good example is how to read an MRI, or how to interpret a painting from the emotional side, which there is some research about. In analysing images, there is a huge field there. Another branch is all the tracking systems, and markerless tracking systems, in sport. That's huge. And there is a lot of data: 25 data points per second and, in our case, ten players on court - a lot of information to analyse right there.
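The scale Xavi mentions, roughly 25 positions per second for every player on court, is easiest to appreciate with a small sketch. The coordinates below are synthetic and the frame rate is assumed, but the distance and speed calculation is the kind of basic derivation such tracking feeds support:

```python
# A sketch of working with optical tracking data sampled at ~25 Hz.
# Coordinates are synthetic; units are metres and the frame rate is assumed.
import numpy as np

fps = 25                                  # frames per second (assumed)
rng = np.random.default_rng(0)
n_frames = fps * 10                       # ten seconds of one player's movement

# Simulate a smooth-ish 2D trajectory by accumulating small random steps.
steps = rng.normal(scale=0.15, size=(n_frames, 2))
xy = np.cumsum(steps, axis=0)

# Frame-to-frame displacement -> distance covered and instantaneous speed.
displacement = np.linalg.norm(np.diff(xy, axis=0), axis=1)
total_distance_m = displacement.sum()
speed_ms = displacement * fps             # metres per frame -> metres per second

print(f"frames: {n_frames}, distance: {total_distance_m:.1f} m, "
      f"peak speed: {speed_ms.max():.2f} m/s")
```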

[00:47:53] The other areas that I think will benefit a lot are two more branches. As I was saying before, talent identification and roster management are fields that are currently benefiting from AI, because it allows you to simulate - which is another very important word for what AI will allow us to do - your future team without actually making the trade. Meaning, if I cluster the type of player that I'm looking for, which is a type A player, I want to find this type A player, put them in my team, and see what the effect on my output is. Those types of simulations are endless, and that's where machine learning and AI are very good: classifying, clustering, and then simulating outputs and playing endlessly with that. And when you tie team performance and player performance to the salary cap and the budget, then it's a very powerful tool where you are simulating the performance of the team relative to the money you have to invest. That's a huge field where teams are playing with AI already. And I'm sure they will grow this, because at the end of the day they are managing the budgets of the teams.
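The cluster-then-simulate workflow Xavi outlines can be sketched very roughly. Everything below is synthetic - the features, the player pool and the stand-in 'team output' model - so it only illustrates the shape of the idea, not any team's actual method:

```python
# A rough sketch of the cluster-then-simulate workflow described above:
# group players into types, then estimate team output with a swapped-in
# player of the desired type. All data and the output model are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 60 hypothetical players described by a few per-game features
# (e.g. scoring, rebounding, assists, defensive impact).
pool = rng.normal(size=(60, 4))

# Cluster the pool into player "types".
kmeans = KMeans(n_clusters=4, n_init=10, random_state=1).fit(pool)

def team_output(players: np.ndarray) -> float:
    """Stand-in for a fitted team-performance model: here, just a fixed
    weighted sum of the roster's average features."""
    weights = np.array([0.4, 0.2, 0.25, 0.15])
    return float(players.mean(axis=0) @ weights)

current_roster = pool[:10]                # ten players currently on the roster
print("current projected output:", round(team_output(current_roster), 3))

# Simulate swapping the last roster spot for a "type A" player,
# represented here by the centroid of cluster 0.
type_a = kmeans.cluster_centers_[0]
candidate_roster = current_roster.copy()
candidate_roster[-1] = type_a
print("with type A swap:", round(team_output(candidate_roster), 3))
```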

[00:49:10] And the third aspect that I think we will benefit from is not just medicine, and not just the imaging side. There is a lot in medicine slash performance - in biomechanics and understanding the complexities of human movement - where AI will for sure be, and already is, very, very useful. And once we understand how complex systems work and we use AI smartly to deconstruct that complexity, skill acquisition and injury prevention are fields that will benefit hugely from AI. 100%.

Sam Robertson (Host): [00:49:47] I absolutely agree. And hopefully the current students studying sports science and sports engineering, etc, are all having a good look at, well a good listen to that, I suppose. Because these are the hard skills it'll be really important to develop. 

[00:50:02] On that note, thank you Xavi for joining us on the show and sharing some of your insightful perspectives on AI in sport. 

Dr Xavi Schelling: [00:50:09] Thank you, Sam. I'm very excited to be a part of this project because I think it's a great idea and it's the future of our field, if not the present right now.

(Music Interlude) 

Final Thoughts 

Sam Robertson (Host): [00:50:27] And now some final thoughts from me on today's question. It seems that the inner workings of AI will, for the short term at least, largely remain a mystery to many of us. But this hidden and mysterious nature of AI is actually inherent. It is, after all, the covert implementation of AI that actually helps to make our lives so much easier. If we were noticing it all of the time, chances are it wouldn't be doing its job properly. 

[00:50:51] Having said this, definitions of AI need to be and can be made clearer. The many benefits of doing this were mentioned all throughout this episode. If we can do that, we're able to start demystifying AI and talking more specifically about what we actually mean. Processing, databasing, machine learning, automation. And of course doing this also allows us to evaluate both its impact and shortcomings more accurately. 

[00:51:15] It also seems that sport's generally cautious approach to date to the widespread implementation of AI is well-placed. We're right to hold AI to a much higher level of scrutiny than we do humans. Its ability to impact at scale is far greater, and we have the ability to design it how we want, rather than letting it shape us. But of course, it's also this scalability that makes AI so powerful and attractive. Thankfully, as we've heard, there are people and teams working to ensure its appropriate governance.

[00:51:44] There's no doubt that complex simulations and real time predictions are on the near horizon for many areas in sport. But it's perhaps the more mundane aspects which have the ability to save us time and make our work easier. In sport, however, these benefits still remain largely unrealised. As almost anyone working in sport would no doubt agree though, that is a form of help that would be very much appreciated. 

[00:52:08] I'm Sam Robertson, and this has been One Track Mind. Join us next episode, where we'll be asking: will your job exist in the future?

Outro

Lara Chan-Baker: [00:52:17] One Track Mind is brought to you by Track and Victoria University. Our host is Professor Sam Robertson and our producer is Lara Chan-Baker. That's me!

[00:52:28] If you care about these issues as much as we do, please support us by subscribing, leaving a review on iTunes, and recommending the show to a friend. It only takes a minute, but it makes all the difference. 

[00:52:40] If you want more where this came from, follow us on Twitter @trackvu, on Instagram @track.vu, or just head to trackvu.com. While  you're there, why not sign up for our newsletter? It's a regular dose of sports science insights from our leading team of researchers, with links to further reading on each episode topic. 

[00:53:00] Thank you so much for listening to One Track Mind. We will see you soon.

