“Our Great Barrier Reef is the size of Italy. We don’t have enough people to really go out there and dive and do the work that needs to be done to help protect it.”
–Sue Keay
About Sue Keay
Dr Sue Keay is Director of the UNSW AI Institute and Founder and Chair of Robotics Australia Group, the peak body for the robotics industry in the country. Sue is a Fellow of the Australian Academy of Technology and Engineering and serves on numerous advisory boards. She was featured on the 2025 H20 AI 100 list and the Cosmos list of Remarkable and Inspirational Women in Australian Science.
What you will learn
- How AI and robotics can address complex environmental challenges, such as preserving the Great Barrier Reef
- The importance of open-minded leadership and organizational experimentation in AI transformation
- Strategies for implementing effective AI governance and leveraging diverse expertise within organizations
- Balancing cognitive augmentation and cognitive offloading with AI tools in education and work
- The evolving impact of AI and robotics on future job roles, emphasizing augmentation rather than full replacement
- Risks and opportunities associated with relying on external AI models, highlighting the case for sovereign AI
- The significance of investing in public AI infrastructure and retaining AI talent for national competitiveness
- Approaches to fostering a vibrant domestic AI ecosystem, including talent attraction, infrastructure, and unique local advantages
Episode Resources
Transcript
Ross Dawson: So it is wonderful to have you on the show.
Sue Keay: Yeah, thanks very much for having me, Ross.
Ross Dawson: So you’ve been doing so much and getting some wonderful accolades for your work, and I think that comes with a positive framing. So at a high level, how can AI best augment humanity? Or what are the things we can imagine?
Sue Keay: Well, you know, one of the best examples that I often share with people is around how AI could be applied to solve environmental challenges. I think the key aspects of AI that people are only just really starting to grasp are not only the velocity with which AI is happening and starting to have an impact on the world at the moment, but also the scale.
I really look at this more from the perspective of robotics, where AI is having a physically active role in the environment. Where I see the big opportunities are in solving problems that humans to date have been unable to solve on our own. When I was in Queensland, one of the research groups I worked with had developed an underwater vision-guided robot that could do a number of things and was looking at how it could play a role in helping to preserve our Great Barrier Reef.
Our Great Barrier Reef is the size of Italy. We don’t have enough people to really go out there and dive and do the work that needs to be done to help protect it. There are a number of threats to the Great Barrier Reef, such as the proliferation of crown-of-thorns starfish that are literally eating the reef. At the moment, we try to control their numbers using human divers, but that’s inherently unsafe, and we can only do it in areas where tourists go, so the rest of the reef is left to ruin.
But also, as ocean temperatures rise, coral is currently spawning in temperatures that are not conducive to coral growth. The robot was developed so that it could collect coral spawn and essentially move it further south into ocean temperatures that are more conducive to coral growth. To my mind, if we could find a commercial rationale to invest, then we could have a whole bunch of these robots working as a swarm, helping to collect coral spawn and rejuvenate the coral reef, encouraging coral growth a bit further south in conditions that are conducive.
It’s just something we can’t tackle on our own. To me, that’s where the opportunity lies: being able to solve some of these challenges, like climate change, where we’re desperately needing solutions and, as a species, we haven’t done a great job of finding them on our own to date.
Ross Dawson: That’s a fantastic example. Obviously, environmental challenges broadly are described as wicked problems, as in, there is no ready solution. So there’s a cognitive aspect to this: not so much finding the solution as finding pathways to work out the ways in which we can address impact, or move against climate change. That’s a really wonderful example of where you’re actually putting that into practice, manifesting it with robotics.
Sue Keay: Yeah, that’s right. It’s just, what’s the commercial imperative? There are a lot of challenges that we can imagine solving, but at the end of the day, someone does have to invest in making it happen.
Ross Dawson: So one of the other things, which is, I suppose, not quite as wicked a problem as climate change, but is organizational transformation. The world is changing faster than organizations are. I suppose a lot of leaders suddenly say, oh, we’ve got AI, how do we put this into practice? You do a lot with leaders and communicating and engaging with them. How do you help leaders to understand the ways in which they can transform organizations in an AI world?
Sue Keay: Yeah, well, there’s no simple answer to that question, is there? But I think the most important thing, and it’s becoming increasingly clear, is that leaders have to have an open mindset. No transformation works if the organization’s leadership doesn’t send clear messaging that experimenting with artificial intelligence, and using artificial intelligence within the business, is a priority, and then act accordingly.
I think that’s the biggest role that leaders can play, as well as modeling the sort of behavior that they’re expecting from their employees. In many cases, that just means experimenting with AI on a personal level. But it’s very hard to do that if you can’t engage with having an open mindset.
Because I think it’s a very challenging time—people are having to make decisions at a very rapid pace, and it makes people feel very uncomfortable. But at the end of the day, that’s the leader’s responsibility: to guide organizations through these tumultuous times, encouraging and empowering people at the individual level to do what they can to understand how artificial intelligence is going to impact the business.
So I think leadership is vital, but also making room for people from all parts of the business to be able to play a role and bring their imagination to the table in terms of how artificial intelligence can be applied. As I said, I don’t think anyone’s got all of the answers. The people who understand the domain best are the people working in the business. So giving them the tools and understanding about AI and how it might be used in the business is critical if you want to survive the AI transformation that we’re all living through at the moment.
Ross Dawson: In Thriving on Overload, I talk about openness to experience being what enables our ability to synthesize things, make sense of the world, and take action. So that’s one of the questions: how do we then make ourselves more open to experience or ideas? In what you’ve said, and also more generally in your communication, you talk about experimentation being a fundamental piece for leaders and throughout organizations. But that needs to be balanced with some sort of governance, in the sense of saying, well, what experiments go too far? Or how do you build the learning loops from experiments? So if a leader says, all right, we are going to experiment and learn and get ideas to come up from all parts of the organization and see what works, how can that be best structured?
Sue Keay: Yeah, I think it does open the door for some new styles of governance. Increasingly, we’re seeing companies reach out—if they don’t have internal AI expertise—to bring AI expertise in, in the form of external advisory roles. I think it is also a real opportunity for reverse mentoring in many cases, where some of the answers might actually lie with more junior members of the staff who wouldn’t typically get a seat at the table in some of the decision-making roles.
Being able to find effective ways that those people, particularly if they have knowledge about artificial intelligence, can play a more productive leadership role is important. So really, it’s about harnessing whatever resources are at your disposal, whether they actually be within the organization or external to the organization, to help make things happen.
Ross Dawson: So essentially being more AI aware and AI capable to help design some new governance as well as drive the experimentation.
Sue Keay: Well, I think at the end of the day, what it involves is having a good, long hard look at where the organization is at today, and making that assessment of how well positioned the organization is for all of these rapid changes that are occurring. Where there are deficits, putting things in place to help fill those gaps and to make sure that staff feel supported through the process.
But I think one of the things—because, in essence, this is just a huge change management process—that is really vital is ensuring that people feel that they have a voice in the future. Just to give you an example from where I work, that also includes being flexible enough to accept when people do not want to engage with this transformation.
If, for example, you have students who don’t want to use AI tools, or you have staff who don’t want to use AI tools, then thinking about what that means for the business. Not necessarily looking to change people’s minds, but looking at what are the ways that they can continue to contribute, but don’t feel put in a position where they have no choice.
Ross Dawson: That’s a very interesting observation. I think it’s very important. Obviously, I don’t think it was your decision, but UNSW is one of the universities which has led in terms of providing AI LLMs to students and faculty. I’d love to hear any reflections from what you’ve seen in that experiment.
Sue Keay: Well, not all of the licenses have been rolled out yet, but there was an experiment, and there was significant uptake. So there was definitely a lot of appetite to try these AI tools, but there was also a lot of pushback, and that’s just going to be an ongoing process.
At the end of the day, people need to feel that they have some autonomy about the way these decisions impact on their lives, and if they choose not to use AI tools, then that should be an option.
Ross Dawson: Which takes us to the very rigorous discussion now around cognitive offloading versus cognitive augmentation, where “LLMs make you dumber” is sort of one of the general memes out there. It’s possible that they can, and I think how we use these tools is really fundamental. In a higher education institution, that’s a particularly salient point.
Sue Keay: Yeah, well, sadly, what it means is that failure rates increase, and that hopefully will just be a temporary blip. People will discover that if they are not getting the marks that previous years’ students have received, then they maybe need to review how they were using these tools, and whether they are helping or hindering the learning process.
Sadly, I think that will now become part of the study process where people will experiment. Maybe they’ll use these AI tools to help them with tutorials and assignments, but they will also need to make sure that they are spending time on activities that will ensure that they would be able to pass exams and get the marks that they’re hoping to get as part of their degrees.
So it is a different situation from any other that students currently face, and it’s happening across all levels. Arguably, it’s also happening in the workplace, where people might find that, isolated from their AI tools, they’re not able to produce the level of work that would normally be expected.
This is all Brave New World territory and frontiers that we haven’t crossed before. But there are some balancing mechanisms. In the case of universities, when it comes to assigning grades, if people have done that cognitive offloading onto their AI tools but are then tested on their knowledge in the absence of those tools, then that’s a really good indicator of how much people are learning.
Ross Dawson: Yeah, and I think the path of least resistance is often what humans tend to take. But certainly when you’re a university student, you have the responsibility to do what will develop your learning, rather than submit work that is mainly AI.
Sue Keay: Well, there are consequences to that kind of cognitive offloading.
Ross Dawson: So this takes us to work. Many people are very negative on the future of work and saying, oh yeah, AI will be able to do everything. Amongst other things, we have a lot of choice around how we go about it. So just to start, how should we be approaching AI in the workforce in order to help drive future job prosperity?
Sue Keay: Well, first I’d like to say that I probably have a slightly different outlook on that premise, because of having more of a focus on robotics. If you do anything in the physical world, then I would argue that it is probably going to be a long time before AI would be replacing a lot of what you do. Most jobs involve more than sitting behind a computer—they involve interacting with people, and in many cases, doing physical tasks. We are not at the point where physical AI is anywhere near as capable as any human being.
So I think there are a lot of things that are unlikely to be replaced in the near future, in terms of the tasks that humans undertake. More importantly, as things evolve, we might find that there are additional tasks that we can take on that we’ve been unable to do in the past.
I’ll give you one example of that. There is currently a lot of work happening in agricultural robotics, looking at how we can reduce the amount of pesticide use by very precise spraying of weeds in fields. If you use a robot to do that task, then you can significantly reduce the amount of pesticide. It also means that the farmer can be doing a whole bunch of other work, rather than sitting on a tractor pouring pesticide over their fields.
But importantly, it’s not a replacement for all of the other tasks that need to happen on the farm. The robot is actually just doing something that it is particularly good at. I think there’s going to be a whole range of things where we discover that these are very useful additions, as opposed to replacements, to what human workers need to be doing.
The analogy, though, is that in times past, when there was more human labor available at a cheaper price, the task of picking weeds out of a field might have fallen to a dozen people. Yet now, typically in Australia, most farming has to be done by a single farmer, augmented by a whole bunch of very large equipment to help manage the farm sizes that we have in Australia.
In some respects, you could look at it as almost going back to the days when labor was more plentiful, and you could be using these physical versions of AI to do those very fine tasks that we no longer have the people to be able to do, if that makes sense.
Ross Dawson: Yeah, absolutely. I’m certainly far more positive than most on the potential for a positive future of work. Broadly, your point is there is so much that we can be freed up to do, so much that can be usefully done by people in physical work and in cognitive work. I think it’s a bit of a basic idea—oh, yeah, AI will free us up to do more things—but it’s true. We just have to imagine what it is we could be doing, and how we can use our time and capabilities effectively, because I think there’s so much more demand for us to apply ourselves well, and that just comes back to the mindset.
Sue Keay: Yes, exactly. Our demographics are not in our favor, in that we have an aging population. The only way we can bring in a supply of younger people is through immigration. We’re going to have a lot of labor challenges. As people say, the jobs might not be the same as the current jobs that we have, and certainly, if you have a role that is very much only computer-facing, then there might very well be some aspects of your work that an AI is able to do.
But then it’s really looking at where the value within a business is generated, and focusing efforts on that. What is very challenging for many businesses with this AI transformation is the discovery that perhaps they’re not as aware of all of the processes currently happening within the business, and in particular, where value is being generated.
Being able to do that deep dive of understanding what current business practices are and where value is being generated is really critical at the moment. Also, because of the threat of AI allowing competitors to do whatever it is that you do, but better, it really does put a lot of pressure on businesses to understand what their competitive advantage is and look at how they can best protect that.
Everyone should be aware that some of these AI tools come with risks. The more you embrace these tools, if you don’t have a good understanding of where your data is stored and how it is being used (even if it’s not your data as such, it may be the processes you’re allowing another company’s software to gain insight into), the more you might unintentionally be giving away what is actually the core value proposition of your business.
Ross Dawson: Yeah, one of the things is that AI helps us, not least, by understanding what it is we actually do now, and where we can apply AI. So you are a leading voice, possibly the leading voice, supporting sovereign AI in Australia. Perhaps taking a step back, and using Australia as an example but speaking to our international audience as well: what is sovereign AI, and what is the case for being able to build it?
Sue Keay: Well, I think unless you’re the US or China, then you are probably reliant, at the moment, on models from the US or China. There is nothing wrong with using those models and AI tools that are built from them, but you also have to understand the risks that are inherent in giving responsibility, often for very vital business processes, to software and tools that your country doesn’t have any control or ownership over.
There are many critical industries where it probably makes a lot more sense for AI tools to be developed internally, particularly where critical data sets are concerned, and regulated industries where data has to be kept in-country. It makes a lot of sense to be able to develop your own AI models and your own AI tools. While they may not have the functionality of some of the existing frontier models, I think there is yet a lot of opportunity—and definitely unmet opportunity—for developing a lot of our sovereign data sets and looking at ways that we can create value for an economy based on that data, which we definitely don’t want to be opening up to other countries to benefit from.
It’s data that is owned by a nation’s people, that has been invested in through taxes on people and companies in that country. I think that’s the key argument for why every country should look at how it can develop some of its own AI models and have some degree of sovereignty. That means having some ownership and control over AI, particularly for critical industries and for these national data sets, which really are, in essence, national treasures.
I think we’re starting to see it in some of the court cases that are coming up around copyright. For a long time, AI has benefited from the lack of protections and, in some cases, lack of understanding of the value of data.
I’ll give you another example of this, again from the physical AI realm. In agriculture, it’s very common for people to fly drones to gather information for agronomists. It’s often more convenient for a farmer to get a consultant to do that work, because then they don’t have to worry about, in Australia, the CASA regulations on flying drones. They also don’t need to worry about the software to analyze all of the data coming in from the drone and make decisions about what it means: whether you want to plant, or whether you might have issues in one particular field.
You can offload all of that responsibility onto the operator, but in many cases, the software these drones are using will then ingest all of the data collected from your farm and use it to improve the provider’s own software and models. In some respects, that sounds like a good thing: the next time they fly the drone and run that software, they can give you better answers. But it is actually the farmer’s data.
At the moment, in many industries, the people who own the software are taking control of data purely because people don’t appreciate that they can push back and say, actually, no, that data is mine, and I have the right to say whether you can use it or not.
An analogy on a more personal level is how many of us are using social media platforms that are ingesting a whole bunch of data about us and never give us any financial return for the use of all of that information. Indeed, now they are fairly actively using that information to influence us in ways that are in the commercial interests of the people providing the service. In essence, they’re creating captive markets.
The value of individual and business data has not really been realized in many industries, and that’s something that has to change.
Ross Dawson: Yeah, that sounds pretty compelling to me. I was on a panel a while ago, a few months ago, on should Australia build or buy AI? The point I made is that it’s not all or nothing. There are layers: you’ve got your data, as you pointed out, you have your data centers and compute infrastructure, you’ve got your foundation models, you have some AI infrastructure above that, and then the application layers. You could slice it up a number of ways.
All this takes investment. So what are the choices we have? Where should we be focusing in terms of the investment required across those layers? How do we get that capital? There are some external people outside Australia offering to do things, but that in turn leads to a lack of ownership. So how should we be going about this?
Sue Keay: Yeah, well, it seems we have no shortage of capital. If you’re a business that wants to be able to run AI models, there is significant investment currently planned or slated for developing data centers that would allow you to do a lot of inference on the architecture supplied by those data centers.
But where we’re not seeing investment is in the development of our supercomputing facilities, to have more GPUs that would allow the development of AI models. The commercial case for building data centers is predicated mainly on the use being inference rather than AI training. Most businesses are only interested in inference, and that’s fine; there’s plenty of investment in that area. But for AI researchers and for some companies, having access to GPU clusters capable of training on very large data sets and building these foundation models relies on investment in the most up-to-date GPUs, in sufficient numbers to form a cluster.
In the example of Australia, we have not upgraded our supercomputing facilities since 2018. We do not have an AI strategy that clearly outlines what we are hoping to see in the future. In many other countries, such as the UK, Norway, and Canada, significant investments in private infrastructure (for example, OpenAI’s Stargate projects in both the UK and Norway) are balanced by significant public investment.
I think where we’re missing an opportunity at the moment in Australia is in that public investment in AI infrastructure. Some people might characterize this as just a problem for the boffins at university, but in reality, what it means is that we will start to lose the AI talent that we have in Australia, because we’re not giving them opportunities that are comparable with the opportunities in other countries.
At the moment, that opportunity lies in developing these AI models. Even for a nation to be able to undergo this AI transformation, you rely on having AI specialists who understand how these models are developed, who understand the risks as well as the opportunities, and who know where it makes sense to build national models.
We know that Australia is starting to lose its AI talent. UNSW has the largest engineering faculty in Australia, so I would describe it as an engine room that is producing a lot of our AI talent, but our ability to hang on to it at the moment is pretty slim. When we’re trying to recruit people, one of the key questions that they have is, how many GPUs do I have access to? At the moment, there’s not a great answer to that question in Australia.
I think it’s very hard to undergo an AI transformation of an entire economy and encourage businesses to be adopting artificial intelligence if we lose all of the people who understand how that artificial intelligence is being built.
Ross Dawson: You know, in Silicon Valley, classically, the recruitment line for the leading engineers is: this is how many H100s we’ve got. That’s because it means they can do their work to the greatest effect.
Exactly. So perhaps you can put this in an Australian context, but also for things that happen more broadly. I guess there’s this point around the talent feedback loop: talent wants access to the compute that enables them to do their research, but also to other talent. There is very much a positive feedback loop: if there are lots of other wonderful, talented people there, that’s where I can learn and develop. So there are both vicious and virtuous feedback loops there.
But just more broadly—perhaps you can frame it as Australia, but this might be advice that is taken around the world—what is your call to action? What is it that we can or should be doing?
Sue Keay: I think that investing not just in infrastructure, but importantly in the people who are able to use that infrastructure, to develop AI models and to develop data pipelines, is a very productive area if you have the opportunity to invest. It’s a really good way to ensure that you can both attract and retain talent, because there will continue to be a lot of pressure on AI talent.
You did ask for a global perspective, but I will add that one of the opportunities Australia has is that many people consider we have an enviable lifestyle: a nice climate, work that in many cases is reasonably close to a beautiful beach, and a lovely natural environment. These are selling points that many people would take up, if we could also show we were giving them opportunities to develop their careers in AI, even if it means sacrificing potentially much higher salaries in other countries.
So it really is about assessing what are the attractions that your particular economy has for AI talent, and then making decisions accordingly to help make sure that you can be an attractive destination.
Ross Dawson: Which goes to—just recently, it occurred to me that we should be building an AI Center of Excellence in Bondi Junction, which can be very close to the city, also very close to the beach, tapping the extraordinary beauty and possibilities of the region. So I’m going to be putting out the call to any large organizations that may think that’s a good idea.
Sue Keay: Oh yeah, I’ll work there. Ross, count me in.
Ross Dawson: So where can people find out more about your work? Your multi-dimensional work.
Sue Keay: Sure, so UNSW AI Institute. You can find us through the UNSW homepage—unsw.edu.au. The UNSW AI Institute is a pan-university institute, which means that it’s not just about the engineers and computer scientists who are developing AI models and algorithms—although we love them—it also encompasses AI research in all of its various forms.
We do a lot in health and medicine. We also have a lot of legal scholars who are experts in the legal frameworks around AI and its implications for various laws, as well as social scientists looking at the implications of AI being developed here and deployed in Australia, and the business opportunities, of course.
Ross Dawson: Fantastic. Thank you so much for all of your work and advocacy. If wonderful things happen with AI in Australia, that will be significantly due to you.
Sue Keay: Oh, thanks, Ross. Well, fingers crossed.