
Product Management Webinar: High-Stakes Product Decisions

Making an Impact: Building product when the stakes are high with Randeep Sidhu

Are you ready to uncover the strategies behind making critical product decisions when the stakes are at their highest? Want to discover the secret to critical decision-making and how to build under pressure? 

Join us for a webinar like no other with special guest, Randeep Sidhu, AI leader, healthcare specialist, and the person behind the UK’s technology Covid defense, the NHS C19 app, and host Janna Bastow as they explore the art of making critical product decisions in high-stakes environments.

About Randeep Sidhu

Randeep is an AI leader who builds disruptive technology in healthcare, regulated industries and startups. He focuses on consumer apps, AI and digital transformation. He is listed as a Yahoo Outstanding LGBT+ Executive for being a global champion of inclusion. He was also a director at Babylon Health, where he worked on building healthcare across the UK, Canada and the USA, and created the Emerging Markets division, building health tech in Rwanda.

In July 2020, Randeep’s expertise meant he was asked by the UK government to build and run the UK’s technology Covid defense, the NHS C19 app, which he built in just six weeks, while pioneering specific research to include the needs of LGBTQ+, refugee and PoC communities. Here, he also advised UK Covid groups (Covid-O, SAGE, Spi-B, and Spi-M), as well as multiple government bodies and multinationals like Google and Apple.

He has previously supported and advised The Alan Turing Institute – the UK’s National Institute for AI – in removing AI bias across healthcare and policing.

Key Takeaways

  • The decision-making frameworks used by successful product leaders
  • How to make well-informed decisions even in the midst of uncertainty
  • How to align your team’s efforts with the greater purpose
  • The tools and techniques needed to build products when the stakes are high
  • And so much more!

Our guest also shared his experience navigating the high pressure of building the NHS C19 app for the UK Government in an extraordinary six weeks, an app credited with saving over 10,000 lives and halving the number of COVID cases in hospitals.


[00:00:00] Janna Bastow: Hello, everybody. Welcome to the Product Expert Fireside Series here that we run at ProdPad.

As many of you might know, this is a series of webinars that we run. We’ve been running them monthly for quite some time now. We’ve had a series of past talks and firesides like this that have been recorded, so you can go back into the history and see those conversations. And it’s always with amazing experts that we bring on board to share their insights and their learnings.

Before we jump in and I introduce our guest, Randeep, I would like to just tell you a little bit about us here at ProdPad. So ProdPad is a tool that we built when we were product managers ourselves. Myself and Simon, we were product managers and we needed tools to do our own jobs, and tools like ProdPad didn’t exist.

So we started hacking away. We needed something to keep track of the experiments and feedback we had, these big piles of backlogs. And so we built ProdPad, and it gave us control, organization and transparency, for the rest of the org as well, into the product management space and the decisions being made in there.

It gave us a single source of truth for what was going on in the product space. And now it’s being used by thousands of teams around the world. We do have a free trial, as well as a sandbox environment that is loaded with example data: example roadmaps, OKRs, experiments, feedback and stuff like that.

So you can see how it all fits together. We’d love to hear your feedback. Jump in there and start playing around, and let us know what you think. And I do have a mini announcement and then an ask for help from all of you as well. So, we are launching something new on Product Hunt.

We only do a Product Hunt launch every couple of years or so, and we’ve got something big out there. I’m going to be following up with you all with an email on Monday, but we’d love to get your upvotes. For any of you who know Product Hunt, you know how important those upvotes and those reviews are as well.

So go on there and give us a five star review. If you love your ProdPad, get in there and give us a little thumbs up as well. That gives us a boost and helps us spread the word about it. But the thing in particular that we’re launching is our AI features. We have been building some really neat stuff over the course of the last few months, and it involves GPT-powered stuff that allows you to remove the grunt work as a product manager.

So you can generate ideas and details, so you don’t have that blank page to start from all the time. You can generate user stories and brainstorm key results, because we all know that product managers hate having to write user stories from scratch. You can also use it to get feedback on stuff. We didn’t want it to just generate stuff for product managers; we want it to be a smart sidekick for you. So you can say to it, here’s my vision. Is this vision any good? Or, this idea that I’ve got, is it aligned with my vision? And it’ll give you smart feedback that you can use right that moment to improve your ideas or your vision.

And there’s also a new one. This one’s brand new. Let’s almost call it alpha. If you’ve got the Slack integration with ProdPad, you’ve already got this working; for anybody who’s got this, it’s been quietly launched behind the scenes. It’s a Slack bot for product advice. You can talk to the ProdPad bot that’s installed and ask it for product advice, but it’s also been clued into what’s going on in your ProdPad account.

So you can ask it things like: could you summarize last week’s feedback and tell me if it’s aligned with our goals? Could you tell me what’s going on on our roadmap right now, or what’s moving fast, or what’s stuck on our roadmap? Ask it some questions and it acts like your sidekick and gives you the information directly in Slack.

So give that a try. We’d love your feedback on how that’s working for you and what kind of questions you’re asking. So, all that is about us, but today is not about us. I’d like to introduce you to our guest for the day. Big thank you to Randeep Sidhu for making time to come in to chat. I know Randeep through the Mind the Product world.

I think we met at the speaker dinner a couple of years ago, before you got up and blew our minds with the talk you gave about how you got roped in by the government to build the COVID-19 app, with very little resource and very little turnaround time, to just make it happen.

And you were at Babylon at the time, holding a pivotal role there. And that COVID-19 app has been credited with saving 10,000 lives. So, Randeep has been a key player in tech startups. He’s an expert in AI and healthcare technology. He’s also advised major companies like Google and Apple, as well as the Alan Turing Institute on removing AI bias.

So really excited to have him here. We’re going to be talking about a wide range of different things. We were talking earlier about how to wrap this up under one theme, and we said let’s talk about how to drive impact as a product manager, and what to do when the stakes are high in product management, because that is really the space where Randeep has been operating as a product person.

So Randeep, big welcome. Everyone say welcome to Randeep. Thank you so much for joining us today. 

[00:05:12] Randeep Sidhu: Thank you. I’m happy to be here. 

[00:05:13] Janna Bastow: Absolutely. And so, could you introduce yourself and give us a little bit of insight into your journey into the world of AI and healthcare? 

[00:05:21] Randeep Sidhu: Most people worry because they think, oh, I’m not an AI expert, I’m not a product expert. And they look at someone like me and go, oh, of course it was destined for you to do this. And I’m like, absolute rubbish, because I’ve got a very weird background. I mention it just because it helps show that so many people haven’t had a destined route into this world.

I did evolutionary biology at university, then pivoted to work for a charity called Teach First. So I spent the first two years of my career as a frontline teacher in the roughest schools in London, and then helping that charity growth-hack and grow. That was 20 years ago. They’re incredibly successful now, and they recruit thousands and thousands of people into teaching.

And it’s a very successful charity, but I did my career the wrong way around, because I did the charitable work first, then got thrown out into the world of business. And I chose to work in kind of behavioral advertising, initially at AOL and places like that, early, early stage. But I had this itch because I did my career the other way around.

I had a chance to have impact early on, and then I thought my commercial job isn’t giving me that impact. So I joined the board of a museum. I did some charitable board work in my spare time that got me meeting new people and thinking about myself differently. And then through that, I got into consulting, strategy consulting, specifically innovation, product, brand, that kind of thing.

I got a chance to come up with product ideas and business ideas and test them; a lot of the stuff that we do in the discovery phase, I did for this giant consultancy. And so I did that for a number of years and then just got a bit tired of doing the ideation but never the delivery.

So that’s the difference, where product comes in, because product isn’t just consulting, it’s actually delivery. I felt like I wanted to have the idea and not mess up the delivery like most people did. So I was like, okay, I’ll join a business. My first business was in gaming. Amazing. Mobile gaming.

You cut your teeth in the most extreme, high-pressured environment, because actually people are making money, but no one’s living or dying. That’s the reality: it’s just a game. And this game had billions of downloads. It was a mobile game. But it was fun. And at the same time, I’d managed to start on the board of an HIV charity.

That was kind of my next Pokémon evolution, from board work to doing the stuff I care about. And that mirrored what I managed to do in my career. So I went from this product and brand role into working in healthcare. My first startup was building an AI chatbot that helped tell you what could be wrong with you and give you information, maybe even triage you, triage being helping you understand what the next best action is.

If you’ve got an arm that hurts, do you take a paracetamol, see a doctor, go to a hospital? What’s the next action? Did that for a while, then joined Babylon early on, maybe employee 200, as a director, product director of AI, and was there for a number of years. And I ended up building out our Rwandan healthcare practice and making that sustainable.

Then COVID hit. So I joined the NHS. I can talk more about that later, and then I finished the role of building healthcare in Nigeria and Rwanda. So I took my personal passion and managed to apply it in my day job, which is building healthcare for those who need it most. 

[00:08:30] Janna Bastow: Yeah, absolutely.

Could you zoom in on that point where you ended up getting pulled into working on the COVID stuff? 

[00:08:38] Randeep Sidhu: Yeah, so, I’ve spoken about this before, so some people may have heard it before, please forgive me. But essentially I was working at Babylon towards the end of 2019. I was initially building AI at Babylon, and then I pivoted because I realized that the AI being built wasn’t necessarily practical.

It was very theoretical. So I thought, let me apply this AI in a real-life situation, and let’s apply it in a very challenging situation, e.g. Rwanda, because if we can make the AI consultation engine work somewhere like Rwanda, it’s going to be cheaper and more efficient. It worked. We signed a 10-year deal with the government at the end of 2019.

Happy days. Then COVID hit, but at that point, I’d been given all the apps at Babylon globally, just to deliver. There were other teams building components and I was delivering to the end consumer: the UK public app, which is the largest GP practice, the private app, Rwandan services and things in Canada.

So I was building that. And for the first three months of 2020, I was restructuring the company, creating a tribe, a super tribe with a hundred-odd people in it. Then COVID hit, so I was doing some COVID work, and out of the blue, in maybe June 2020, I get a phone call. It was probably through a connection through my charitable work at this HIV charity, and maybe someone spotting Babylon and going, here’s a guy, we should speak to him.

And it was lockdown. So I took the phone call, and it was a guy from government saying, yeah, the NHS COVID app had failed, and they were looking to speak to people who could advise them on what they should do differently next time. So I took the phone call, had a chat, it was like a Friday night. And then the person I spoke to in government said, do you want to speak to someone else? I don’t think I can answer your questions. No, fine.

I think I can’t answer your questions. No, fine. Saturday I spoke to someone and I kept speaking. I think I spoke to six or seven different people, maybe five, five or six. And every time I spoke to someone, I was asking questions about how are you going to consider the needs of X? How are you going to make sure it’s safe?

Maybe Y, maybe Z, just challenging them on their kind of presumptions and assumptions. And at the end of it, I thought, oh my God, people are going to die, because this team can’t fully answer these questions. And I focus a lot on equity: poor people, Black people, brown people with health inequalities.

These are the people that were dying with COVID, my community. And the people I was speaking to didn’t have that lived experience and didn’t have that understanding. So I put the phone down, spoke to my partner: there are some serious problems going on. They agreed. Two days later, they called me back.

So this was like a Tuesday night, and they said the Department of Health wants you to join. I said, when, like tomorrow? I’ve got a job, I can’t just leave. So I managed to orchestrate with my job to basically leave in three days. I spoke to them on Wednesday morning, left on Friday evening, started in government on Saturday.

And that was my next year. Purely because they realized that they didn’t have anyone internally who could really answer the questions that needed to be answered, or even think in that way, which I thought was a fairly standard product way of thinking. But yeah, I got parachuted in in less than a week.

Upended my whole life and then spent a year on the front line. 

[00:11:34] Janna Bastow: That’s incredible. And you made an incredible impact with what you worked on as well. What were some of the most challenging aspects when you jumped in there? 

[00:11:45] Randeep Sidhu: So I didn’t realize at the time, but I walked in and there was no product team.

It was just me. We had an agency, a dev agency, and we had lots of other divisions of government, but I was just like, why have you not got a product person? And then I realized, subsequently, like six months in, that on the Friday they called me, a product person had started, and they resigned on the Tuesday.

So three days later, a junior or mid-level product manager had left because they just thought it was too chaotic. So I ended up having to grow a team, and I didn’t really feel like I had anyone watching my back, watching my six, because when you’ve got a product team, there are other people there.

And I did feel a little bit isolated, because a lot of people who were great and good and smart were there, but no one I’d known before, no one I’d worked with before. So it felt like huge imposter syndrome, and there was also the responsibility. I was a director in government, director general on paper, super senior.

And I thought, if this fails, it’s on my head. And some people advised me not to take the job, because if the first app failed, why is this one going to work? You can’t build tech in government. And so I felt, personally, massive professional risk and also personal risk. Communities I know are really impacted by this.

If I build this and it fails, it’s on my head. If I build this in the wrong way and it somehow goes sideways, that’s my name on it. So it was just huge personal and professional pressure, and I didn’t have any security around me to know that it was going to work. That was the biggest challenge for me, trying to navigate it.

It’s super extreme. Hardly anyone’s ever going to face this kind of challenge, where it’s genuinely life or death and there is no support. And I think in government, lots of people who might have had the experience pulled back, because they thought: the first app failed, they don’t want their name to be associated with a second failure.

So it gave me some space, but it also meant a lot of people who maybe could have helped or supported weren’t around, and I wouldn’t even know that they weren’t around, because they just weren’t there. So that was a sort of interesting and unique challenge, but probably one that’s shared by a lot of people who work in product, because we’re often under-resourced and by ourselves, the only people in the room who maybe have that specific skillset.

Yeah, absolutely. 

[00:14:01] Janna Bastow: So who were you able to turn to for help the most in that sort of situation? 

[00:14:05] Randeep Sidhu: So it’s really important: you have allies everywhere, you just don’t necessarily know it. And there were lots of smart people and lots of passionate people, because it was very cause-driven.

Everyone could see why we were doing this. It was like a wartime kind of footing. So I looked across the people I had: civil servants, and Deloitte, who were on the ground, I believe, or maybe Accenture, I forget. So there were some consultants there who were bright and smart, and I basically got them to grow me a product team.

So I ended up having a product team of about 10. And then we used designers and other people from other agencies by just getting them to find someone: great, I’ve got one person, now find someone like you, find someone else like you. But we basically had to build an app in six weeks. That was the timeline from the day I landed.

So the day we had an app was basically six or seven weeks in. It was definitely building the car as you were driving it. But I was also trying to help, because there were lots of policy people, because government runs on policy. It’s very weird; they were very confused by having a product person calling the shots. So we built a system, like a conveyor belt, that had eight groups, and I literally drew an arrow and went, here are the groups. Every morning we had a kind of whole-team stand-up with all the directors. Step one was what we build, which was me; the next was building it, which was engineering; then approving it; then regulating it; and then the policy teams. We just built a conveyor belt so each group could see what they had to do.

It started with me and then went through the conveyor belt. So that kind of visibility helped, but so did training the individual teams to understand what product was: saying, this is what I’m trying to do, here’s how you can help me, and if you need to speak to me and challenge me, here’s the route that’s most effective.

And I think agreeing the rules of engagement meant there was much less crosstalk and noise. That was definitely something that helped. 

[00:15:55] Janna Bastow: Yeah, absolutely. So have they stuck with this sort of way of working or has the pressure been taken off in some way? 

[00:16:01] Randeep Sidhu: It was a one-time thing, which worked incredibly effectively.

Now, I spot there are some GDS people, Government Digital Service people, on the call, and they’re the team inside government responsible for agile and digital delivery. I don’t know how it works normally. I know that as I left, the team started changing and becoming a much more typical government construction, and I think the pace of delivery slowed down, although COVID itself had slowed down a bit, so maybe that was why.

It’s a shame, but I don’t think it continued. I think we operated with very much a kind of startup mentality. But yeah, maybe that doesn’t always work in a large institution. 

[00:16:46] Janna Bastow: Yeah. Okay. That checks out. It feels like you did a mini transformation of the processes for that point in time in order to get something out.

Because honestly, six weeks to get a product out, even a minor product, is tough work, let alone something that needs to be shipped to a whole country. 

[00:17:01] Randeep Sidhu: So what we did: we had six weeks of building the first version, and it had to be safe. And then we tested and trialled it for six to eight weeks in a borough that had a lot of challenges.

So end to end is probably longer, but we had to release something that was safe and worked, which we did, and then we kept adding refinements and languages and stuff; those were cherries on top. But yeah, there was definitely something there. 

[00:17:22] Janna Bastow: Yeah. So how do you define impact in the context of healthcare and AI?

[00:17:29] Randeep Sidhu: Impact is a hard word, because it all depends on what matters to you. So, I’m gay, I’m brown, I grew up poor (not anymore, necessarily), a Londoner, first of my family to go to university. There are lots of different kinds of intersections that I have that I try and represent for my community.

And everyone on this call will have different kinds of things that they’re passionate about. But the thing with product people is we’re inherently always the most reasonable people in the room, typically. And if you’re not a reasonable product person, you’re probably in the wrong profession.

It doesn’t mean you can’t push for your agenda, but you’re trying to do what’s for the collective best good. That’s everything we do: point estimation, how many story points are we doing, why do X or Y? We’re always trying to optimize for the biggest bang, the biggest impact, which means we’re geared towards impact.

The challenge is when those impacts clash. So, an example, there was a part of the app which had an animation, and the animation was a heartbeat that showed that the app was working. It was built specifically because I knew people who didn’t have familiarity with technology would not know if the app was working.

Otherwise it was just a static, dumb thing. So we built it, and lots of older people, lots of people who hadn’t got familiarity with technology, immigrants, people like my mum, could ask: is it working? Is it moving with a green heartbeat? Yes, then it’s working. And if it was switched off for any reason or disabled, it went red and stopped beating.

Very easy: green good, red bad. I could see the benefit for lots of minority communities with that happening. But then we had a group who, I think it was, oh my God, what’s it called, people who have seizures from certain lights. There are certain people who have challenges with movement and animation.

So they were saying, for this community, it’s bad; it can’t do this, it has to be static. And it’s like, how do you balance the needs of the many with the needs of the few, when they’re both minorities? You just have to take a judgment. And that’s basically all of healthcare. The benefit in a normal product-type decision is that whether you optimize for X user or Y user usually doesn’t have a huge impact; it’s annoying because you might make less revenue. In healthcare, there’s a direct impact on people. So you have to take a lens of: do I focus on this minority group who are really underprivileged?

Or do I assist, a little bit, this majority or slightly larger minority group? So we kept the animation and had a fail state: if the user disabled animations on their phone for safety reasons, it would be disabled in the app, but only on that condition. But there’s that kind of collective-good concept, which you always have to think about.

And that’s really important, not just in product, but I took a lot of learning from healthcare modeling and how healthcare deals with these kinds of problems. 
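For anyone curious what that fail state looks like in practice, here is a minimal sketch of the idea in TypeScript. It is a web-flavoured analogue using the standard prefers-reduced-motion media query rather than whatever OS-level animation setting the NHS app actually checked, and the function and type names are made up for illustration.

```typescript
// Sketch only: a web analogue of the "animation fail state" described above.
// The real app checked the phone's OS-level animation setting; here we use
// the browser's standard "prefers-reduced-motion" media query instead.

type HeartbeatState = "animated" | "static";

function heartbeatState(appActive: boolean): HeartbeatState {
  // Respect the user's accessibility preference: if they have asked the
  // system to reduce motion (e.g. photosensitive epilepsy), never animate.
  const reduceMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches;
  if (reduceMotion) {
    return "static";
  }
  // Otherwise the pulsing heartbeat doubles as a liveness indicator:
  // green and beating while the app is active, red and still when it is not.
  return appActive ? "animated" : "static";
}
```

The design choice mirrors what Randeep describes: the animated default serves the larger group, and the reduced-motion path protects the smaller one without removing the indicator for everyone.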

[00:20:11] Janna Bastow: That makes a lot of sense. Yeah. And actually, a clue from the audience here: they mention that it might be photosensitive epilepsy.

[00:20:19] Randeep Sidhu: Oh, yeah, I think it was something about epilepsy. Yeah. 

[00:20:23] Janna Bastow: And this is one of the issues when you’re working in healthcare: you do have to build for everyone. When you’re working in government, you have to build for everyone. You can’t build something for a specific group and say, ah, this is for people who use MacBooks and have an income of X, and cut out most people. You have to say, hey, what happens when we have to consider pretty much everyone in the country? 

[00:20:50] Randeep Sidhu: Look, you’re American, if I’m correct, or are you Canadian? Canadian. Yeah. Okay. I’m going to blame you for being Canadian.

[00:20:56] Janna Bastow: I’ll forgive it. I’ve been away long enough. I don’t sound Canadian ish. 

[00:21:00] Randeep Sidhu: So it’s really interesting. Healthcare is so polarised. The NHS is amazing. And bang for buck, if you compare us to America, we spend half the share of GDP, gross domestic product, that America does on healthcare. I don’t know the exact numbers, but let’s say every American spends 10 percent of their income on healthcare.

We spend 5 percent through the NHS and taxation. Whatever the exact numbers are, in the UK we spend half. But we have universal coverage; America doesn’t. So when you look at the differences between the UK and America in healthcare, it’s a really good example of this kind of product decisioning when you’ve got a constraint.

So you’ve got 100 story points and you’ve got to allocate them. You do it; some people win, some people lose. And there’s a matrix: if anybody wants more information on this, there’s an NHS risk matrix, which we used and which I can talk about later. In America, they have that same limited budget, but what ends up happening is, if you’ve got a little bit more money, you just gain more story points.

You can actually get more healthcare. So if I give you an example, the NHS maybe has, I don’t know, 50 billion, 100 billion budget. They have to allocate that for the biggest impact. So sometimes someone comes up saying, I need this cancer treatment. It’s got a 10 percent effectiveness rate, but it costs 200 grand.

The NHS says no. I’m going to die; the NHS is withholding healthcare from me. In America, very often on the far right, that’s advertised as: they have socialist healthcare, they have death squads in the UK. That’s how it’s described often on the far right in America, the UK has death squads. And that “death squad” is essentially a product decision group.

Not product, but for healthcare: we’ve got X amount of money, where do we spend it? And looking at that happening in government made me realize that’s how we do product decisioning, and we have those same resource limitations. There’s a small group we can invest a lot of energy in, for a benefit that could be huge for them, but we could have a bigger impact collectively by spending the money elsewhere.

So it was really interesting thinking about American healthcare and how unequal it is, and thinking about how, as much as some people lose, generally more people win when you think about it in a kind of more collectivist way. But obviously, if you’re that person who’s missing out, it can feel quite hard.

It can be quite challenging. And so, in my product framing, I presented it as this Maslow’s hierarchy. I was like, how do I work out how to have the biggest impact? I literally built a funnel. Most people would know a funnel, like a sales funnel, where you lose people at every step of the funnel.

So we need to make the funnel as wide as possible at the beginning and keep as many people in the funnel as we can, but I couldn’t describe it like a sales funnel because it’s healthcare. So I did it as a Maslow’s hierarchy, in a triangle. At the bottom I put: who can physically get this app, as in, is it on the right OS? The next level: who is it dangerous for to have the app, as in, there might be people this app could be dangerous or unsafe for. When you have that thought experiment, you can actually pick out groups who would find something like this dangerous: immigrants, the LGBT community, domestic abuse survivors.

So we built protections for those groups, and a protection that worked for the LGBT group, which was deleting some of your history, worked for everyone. It was funny: all these groups had one problem, we created a solution to that problem, and then just kept moving up. That kind of solution helped have the biggest impact for the most people, but it also gave us a way of understanding and prioritizing bugs.

So instead of everything getting thrown up in the air, you fix the bottom level before you fix the next level. That was a way we enacted impact in our product frameworks, instead of just talking about it and becoming very theoretical. 
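To make that concrete, here is a small illustrative sketch of the “fix the bottom layer first” triage Randeep describes. The tier names, bug titles and numbers are invented for the example; they are not the team’s actual hierarchy or data.

```typescript
// Sketch of the "fix the bottom of the hierarchy first" triage described above.
// Tier names and example bugs are illustrative, not the team's actual data.

// Lower number = lower (more fundamental) layer of the hierarchy.
const tiers = [
  "can the user physically get and run the app",   // tier 0
  "is the app safe for the user to have",          // tier 1
  "does the core feature work",                    // tier 2
  "refinements: languages, polish",                // tier 3
];

interface Bug {
  title: string;
  tier: number;      // index into `tiers`
  affected: number;  // rough estimate of people affected
}

function prioritise(bugs: Bug[]): Bug[] {
  // Sort by hierarchy level first, then by how many people are affected,
  // so a crash on older phones outranks a typo in a translation.
  return [...bugs].sort((a, b) => a.tier - b.tier || b.affected - a.affected);
}

const backlog: Bug[] = [
  { title: "Translation typo", tier: 3, affected: 3_000_000 },
  { title: "Crash on older Android versions", tier: 0, affected: 500_000 },
  { title: "Exposure history not deletable", tier: 1, affected: 60_000_000 },
];

console.log(prioritise(backlog).map(b => b.title));
// -> ["Crash on older Android versions", "Exposure history not deletable", "Translation typo"]
```

The point of the ordering is exactly what Randeep says: a bug on a lower layer blocks everyone above it, so it gets fixed before anything higher up, however loud the higher-level issue is.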

[00:24:53] Janna Bastow: Yeah, that sounds like a really useful way of stacking that funnel up and understanding the impact on the users, from the widest set through to the smallest subsets.

Can you share anything about any surprising or unexpected impacts you’ve seen when working on healthcare initiatives? 

[00:25:15] Randeep Sidhu: So, there’s an interesting thing that happened. I mentioned Babylon; I was working there and I worked in Rwanda. When I joined, Babylon had this opportunity to work in Rwanda, and I don’t think it was as fully robust as it could be.

It was a nice initiative, which helped some people, but I questioned whether it was long-term and sustainable, or maybe just a nice story they could tell to say they weren’t evil. So I saw an opportunity to actually make it sustainable, but also deliver some good value. At the time, as I said, I was director of AI, the sexiest job in Babylon product.

And I was like, I’m going to lead this and I’m going to grow the Rwandan and emerging markets business. People thought I was a bit crazy, but I said, I’m going to roll out this AI and it’s going to work. It’s not going to be perfect; I’m going to try and make it work, but it’s going to be effective and lean. It’s going to do what it needs to do to have impact.

If it has impact in Rwanda, it’ll have a massive impact in the UK for pennies, not pounds. So we were rolling out some form of AI assistance, and long story short, the idea was we were hoping to give it to nurses and doctors. The reason is that healthcare in Rwanda is so understaffed.

Most people will see a nurse, not a doctor, and that nurse will have a different level of training compared to, say, the UK or the West. So often nurses are dealing with challenges they’ve not been trained for, and sometimes someone never sees a doctor at all; they’ll see a junior nurse, and the smaller the area, the more junior the nurses.

So I thought, if I can... The Babylon system we built was a series of questions. It would say: do you have a fever? Do you have pain in your foot? Yes, no. Do you have lights in your eyes? Yes, no. So you give it your symptoms and it keeps asking questions to refine. I thought, wouldn’t it be interesting if we could give this to a nurse or a doctor so they could be upskilled and quickly get a patient through to the end.
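As a rough picture of what a yes/no symptom-refinement flow like that can look like, here is a toy sketch. The questions, branching and advice are entirely made up for illustration; they are not medical guidance and not Babylon’s actual logic.

```typescript
// Illustrative only: a toy version of a yes/no symptom-refinement flow like
// the one described above. Questions, branches and advice are made up and
// are not medical guidance.

type Advice = "self-care" | "see a nurse" | "see a doctor urgently";

interface Node {
  question: string;
  yes: Node | Advice;
  no: Node | Advice;
}

const tree: Node = {
  question: "Do you have a fever?",
  yes: {
    question: "Do you have a stiff neck or does light hurt your eyes?",
    yes: "see a doctor urgently",
    no: "see a nurse",
  },
  no: {
    question: "Is the pain stopping you from daily activities?",
    yes: "see a nurse",
    no: "self-care",
  },
};

// Each answer narrows the possibilities until a next-best-action is reached.
function triage(node: Node | Advice, answer: (q: string) => boolean): Advice {
  if (typeof node === "string") return node;
  return triage(answer(node.question) ? node.yes : node.no, answer);
}

// Example: a patient who answers "yes" to every question.
console.log(triage(tree, () => true)); // -> "see a doctor urgently"
```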

But I’m not just going to throw it into the field; we’ve got to test it. Testing is really important in healthcare, because you can’t just move fast and break things; you’ll move fast and kill things if you do that. So I thought about what we were going to do. A typical situation in Rwanda would be someone turning up at a hospital, at a health center.

And there’s this really strange kind of user entrance. They’ve got these benches, like seven or eight long benches, and it’s like an open hut, essentially. The first person turns up and sits in seat one, and then every person who turns up just sits along. So it’s just a row, and by the time the surgery opens at 9am, there are 70 or 80-plus people queued up.

And then they get seen one by one. There might be someone dying, there might be someone really sick. But fine, if that’s your system. So in my head, I was like, can we get them to answer some questions before they’re seen, so we could actually triage them and get the most urgent cases to the front.

So what I said was: you’re going to spend the whole day sitting here, so what I’m going to do is have a nurse with an iPad who’s got this AI system, and if anybody wants to, this nurse will take you out of the queue and go through the AI system with you. The nurse or the doctor will grade it as they’re asking the questions; the doctor will say, is that a good question?

The patient will say, was that a good question? Did that feel good? And then we’d have a doctor we paid to sit there, and we’d just go, okay, great, thank you, now you can see a real doctor, thank you for your help. So as a kind of upside for helping us test the system, they’d get to the front of the queue and get to see a doctor.

So we added capacity to the system just to test it. The only reason I had a nurse there with an iPad was because I wanted it to be clinically validated. So we had thousands of these done. At the end, the nurses gave us some feedback, the doctors gave us some feedback, and we were like, great.

We can improve it; it generally worked. And I was like, great, this concept is going to work. But then something weird happened. The doctor that we paid just to sit there and take the patients, I wasn’t getting feedback from them. We just had a room, put a doctor in; we were just using that as a kind of carrot.

That doctor came and spoke to me and said, who are these patients? I said, what do you mean? They’re more confident, they’re more articulate, they know what’s going wrong with them, they’re able to get help faster. The interaction was like two or three times faster than it usually takes. And the patients were able to understand when I was speaking to them, and long term, it turned out they ended up following more of the advice.

I was like, what? And there’s an interesting cultural thing that happened.

Part of Rwandan culture is being less confident in front of authority. It’s quite an authoritarian culture, so when you see a doctor, you’re often very kowtowed; you’re often not necessarily as vocal about what’s wrong with you. What was interesting is the patients, by engaging with this kind of robot AI, got to really interrogate what was or wasn’t wrong with them and ask lots of questions.

So by the end of that interaction, they felt much more confident talking about what was wrong with them. When they saw a doctor, they said, this is what’s wrong with me, and they had more confidence in doing that. A lot of the ambiguity and uncertainty was removed. So a completely unintended side effect of this was:

patients got better care, patients felt more confident, and a weird side effect was we started getting people deliberately opting to go through the AI, because they somehow felt that the robot was more skilled than the doctor. Which was a little bit dangerous, having a whole bunch of people trust a doctor less than an AI, and that was another kind of unintended consequence, which we rolled back.

But it wasn’t just something that saved time for doctors and created efficiency; it genuinely helped the patients. So we looked at ways of rolling that system out, not just for a doctor to do the questions, but before someone even got to a doctor. We had these kind of login pads where someone scanned their ID card, could answer some questions in a local language if needed, and then be put into one of a couple of queues depending on how serious they were. So that was something I was not expecting at all. But the social impact of AI is something a lot of people don’t think about, because we consider AI and these interactions so differently. 

[00:31:46] Janna Bastow: It sounds like it really empowered them to take their health into their own hands in a certain way.

[00:31:52] Randeep Sidhu: And it’s something which, obviously, everyone comes with bias. I just hadn’t considered it, and it was really interesting to see it all play out. So I don’t know, there are some unintended bad side effects, I’m sure, but that was a good one. 

[00:32:05] Janna Bastow: Yeah. And speaking of bias in AI, how do you stop AI from hallucinating?

Where was it getting these answers really? 

[00:32:13] Randeep Sidhu: Our system was not artificial general intelligence or anything like that; it was a very limited set of things it could ask you. So it wasn’t like ChatGPT, where it was going to hallucinate. Okay, it could hallucinate with conditions you might have: it could think you had West Nile fever, which was wrong.

And there was an interesting thing that happened. There were two questions which I found hilarious. One: if a woman came to us and said, I’m feeling sick, I’m throwing up, one of the questions we might ask is, have you had sex? If you answered yes, it was like, oh, you’re pregnant. We didn’t ask who you had sex with.

So the chances are... yeah, WebMD, as someone spotted. And that was just LGBT bias, so we corrected that. But then there was another side effect where it would occasionally ask a woman if they had pain in their testicles, because someone forgot to mark testicles as male-only.

And I was like, oh. So those are health systems’ different kinds of hallucinations. And I think the danger is there’s a really weird thing about empathy. I was at a panel earlier, speaking to a bunch of AI people at lunch, and someone was talking about their self-driving car.

They’re like, oh, I’ve got a Tesla, it’s amazing. The reason people complain about a Tesla kind of braking by itself is because they don’t have empathy for the system. When they know how the system works, they can work around it, once they get the flavor of its personality.

And I was like, that’s a terrifying statement. Computers don’t have personality. Computers don’t have empathy. We can code it on them. And the example of Rwanda was interesting because people started feeling comfortable with this AI thinking it was smarter, more intelligent, and more kind of rigorous and authoritative than a human.

They attributed really human qualities to it. And there’s something quite dangerous about giving those attributes to robots and computers, because you then put it into a kind of weird uncanny-valley space where you think it’s real, and then it fails in a catastrophic way, but your barriers are down because it’s presenting itself in a really compelling, human, effective way.

[00:34:35] Janna Bastow: Absolutely. I’ve got one for you: I heard that somebody’s using ChatGPT to power foraging guides, and somebody said, don’t do that, you’re going to kill someone! Mushroom foraging guides and stuff like that. 

[00:34:48] Randeep Sidhu: Didn’t someone die from eating a mushroom? 

[00:34:49] Janna Bastow: Oh, there you go, it’s happened now. Great. 

[00:34:53] Randeep Sidhu: Yeah, from a ChatGPT guide. Yeah, they did a... But anyway, look, what are your thoughts about AI, Janna? Come on, I’m not just sitting here talking while the audience... I’ve given you some thoughts. What are your thoughts? 

[00:35:06] Janna Bastow: I think I went through the entire: wow, this is absolutely incredible, I can see how this is going to take all our jobs. And then about a week later, I was like, oh, hold on, I can see under the veil here. It’s absolutely not going to do that; it’s not actually replicating what it wants us to see. But I can see power in it, reducing grunt work and helping to make connections and that sort of thing, right?

But it’s this tool in our toolkit that we have now, and as long as we understand what it’s capable of and don’t try to make it more capable than it is, then we get to reap its benefits. But I like what you said. You were talking about this case study of this AI doctor in Rwanda, about how, when it had this unintended consequence, you rolled it back.

You’re like, ah, wait, we didn’t want it to do that. That’s not a good thing. We’re going to take that back out. So it doesn’t lead down this path of people over-trusting this thing and therefore not actually trusting doctors, because we have to trust the humans: they actually have had the training, and they’re going to spot things that the AI won’t, because the AI is only trained on certain things.

And I’ve been reading up on how AIs tend to, over time, basically self-perpetuate: they learn something, and then they train themselves on that same thing, and it just perpetuates the same thing over and over again; they don’t learn outside that model.

[00:36:29] Randeep Sidhu: They actually call that, in evolution, genetic drift or inbreeding, because essentially a whole cohort of genetics just keeps mating with itself. And then it goes on. Yeah. 

[00:36:40] Janna Bastow: I think I heard it called model collapse in the AI world. I didn’t quite go down that rabbit hole as much as I probably could have done to speak as an authority on it.

But yeah, I remember reading something about how these AI models are ultimately only as good as what we’ve trained them on, and the only thing we’re putting back into them is the same thing that we put into them in the first place. So...

[00:37:03] Randeep Sidhu: I think a lot of people are aware of that and interested in that space.

And I think, for me, you’re right: artificial general intelligence isn’t the thing that’s going to happen tomorrow or in the next couple of years. Like, I watched the latest Mission Impossible movie and I had to watch it a second time, because I watched it at the premiere with my other half.

He works in film. So we’ve got two kids. And he was like, it’s amazing. I was like, it is nonsense, it’s absolutely ridiculous nonsense. And there’s a scene where the AI is on a video screen and it’s like a giant eye pulsing. But that’s a video screen, it’s not a camera. The AI can’t see you through the video screen.

It’s not going to be offended when something goes wrong. So then I had to watch it a second time, taking everything I know about computers out of my head and going, oh, it’s a really fun action movie. So there’s definitely that kind of public spirit of it being like the Terminator. But what it is currently is actually somewhat more dangerous.

Because when you build AI systems, and I’ve worked on them, and I’m sure most people on this call have worked on them, ignore that it’s AI; let’s imagine it’s an app or a service. Do you ever build the perfect app or service? No. You build an MVP, and you build on it, and you build on it, and you build on it.

And actually what ends up happening is you build your MVP, someone senior says great, ship it, let’s go to the next thing. And you end up just building a series of MVPs; you never really refine it enough to be perfect. So when you build an MVP, what do you do? You ignore all the extraneous chaos. You keep it simple.

You build for the middle of the bell shaped curve. You build for the middle. Problem is, that middle is straight, white, male, middle aged, rich. 

[00:38:34] Janna Bastow: Yeah, probably, yeah. It probably looks like whoever’s building it; you’re building for the people who own MacBooks already. 

[00:38:40] Randeep Sidhu: Even if it’s not deliberately biased, it’s, you’re building for where the market is and who pays money, and who pays money the most is that.

You might go, we’re going to build for our customer base, and that’s who it is. So you build a little model, and you’re like, great, I’ve shipped this. Let’s use a healthcare example. When you dial up a kind of call center, it’s: please tell me your problem. I’m like, I need to cancel my internet.

Or let’s imagine I need to speak to a doctor. First stage: language recognition. You build it for the standard middle. Anyone who’s got English as a second language, who’s never used a robot phone before, who’s got a bad-quality phone because it’s a cheap Android, all these different problems, it doesn’t quite work for them.

So it works for 98 percent of people, or 95 percent, maybe 90 percent, but the 10 percent it doesn’t work for are probably the most vulnerable. So you knock them out at that stage. Then it goes to: describe your health problem. The words and vocabulary you use are wrong or different.

It doesn’t work, because it’s built for the middle. So at every stage: then you go and speak to a doctor or do some sort of AI triage, like I was talking about, and then maybe you don’t answer the questions properly. And it keeps going, it perpetuates all the way through. So the problem is, when you have all these systems,

they’re all just middle, and no one goes back to refine them. And that’s my worry for AI: at every stage of that funnel, you knock people out. All the people that are left are probably the most lucrative, but the ones who are most in need are the ones that have been shaved off at the edges, and they’re the ones left with no recourse.
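To show how that funnel compounds, here is a tiny worked sketch with made-up pass rates per stage; the point is only that modest exclusion at each step stacks up multiplicatively.

```typescript
// Made-up numbers, just to show how "works for ~95% at each stage" compounds.
// Each stage knocks out a slice of users, and the exclusions stack multiplicatively.

const stages = [
  { name: "speech recognition", passRate: 0.95 },
  { name: "describe your problem", passRate: 0.95 },
  { name: "AI triage questions", passRate: 0.9 },
  { name: "follow the advice", passRate: 0.9 },
];

const served = stages.reduce((fraction, s) => fraction * s.passRate, 1);
console.log(`Served end to end: ${(served * 100).toFixed(1)}%`);        // ~73.1%
console.log(`Excluded somewhere: ${((1 - served) * 100).toFixed(1)}%`); // ~26.9%
// And, as Randeep notes, that ~27% is not a random sample: the same
// vulnerable groups tend to be knocked out at every stage.
```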

So I remember seeing something about them getting rid of ticket machines on the London Underground, not having people pay for tickets in person. There’s a whole bunch of people who don’t have credit cards, or maybe can’t have one, and there are genuinely some people who still can’t read.

You’re like, what are they going to do? They need to speak to someone and pay with cash. These are populations that we don’t ever encounter, but they are there, and they’re the people who get stuck. So that’s where I think some of the “AGI and everything being wonderful” stuff kind of hits the rocks.

Actually, what’s the impact of any one of our things? What happens when you tie ChatGPT into X, into Y, and then it makes a decision? All this daisy-chaining is where I find it a little bit more sinister, because you don’t know who’s being knocked out in the chain. 

[00:40:54] Janna Bastow: Yeah. So what can we do in our teams to make sure that sort of thing doesn’t happen or what sort of questions can we be asking to check those sort of things?

[00:41:03] Randeep Sidhu: I think, honestly, just being aware of it. I’m not going to say I’m a perfect practitioner, but I’ve been in the room. No one’s going to build the perfect app, the perfect system; we never get that opportunity. It’s being conscious of: let’s look at the unhappy path through this, not just the happy path, and who is excluded on that unhappy path.

And back to that kind of equity and inclusion thing: okay, 5 percent of users without a phone are going to be knocked out here. What can I do about this? Actually, there might be another service for it; it might be that you have to go into your doctor yourself. But just consider it. You’re not going to fix everything.

Don’t kill yourself trying to fix it. Just document it and know it. Because when you leave, the person who inherits this service won’t have any of that background. They won’t know that you built this decision tool for the middle, and they won’t know the compromises you made to get it shipped.

And so they won’t realize that, oh, it doesn’t work if you’ve got an accent, it doesn’t work if you’re trans, it doesn’t work if X. So if you document it, they can always keep it front of mind. That’s the thing I’d practically suggest we always do: thinking about who gets knocked out, thinking about the unhappy path, and sometimes being tough.

So at Test and Trace, we translated the app into 12 languages. And what’s interesting is the government didn’t typically translate itself into anything other than Welsh, because that’s the legally required language. Local councils would translate into other languages, but typically the central NHS wouldn’t. And we picked those languages to cover most of the population.

So we had four South Asian languages: Punjabi, Gujarati, Bengali, Urdu, but not Hindi. Hindi is the kind of second or most common language in India, but it’s a bit like: everyone has a mother tongue and then speaks Hindi, or everyone has a mother tongue and then speaks English. So most people in the UK who could speak Hindi would likely also speak one of these other languages.

So we didn’t put Hindi in, but we did put Romanian in. Even though it’s a tiny fraction of the population, the Romanian population in the UK suffers huge health inequality; they’re often the people who work in car washes, manual labor, meatpacking, and Romanians often don’t speak English and are quite often challenged and taken advantage of.

So we put in a language with a tiny population where it would have a high impact. We actually had the Indian High Commission write to us and complain that Hindi wasn’t included. And I wrote back very strongly that the reason is anyone in this country who needs it can use one of these other languages; show me stats proving that this is going to help people, versus just being a political stunt.

So that’s the kind of thing where sometimes you can be firm, but you have to think about those consequences. 

[00:43:52] Janna Bastow: And that’s where having data backing up why you’re making these decisions really helps as well. Yes. And David in the chat actually had an interesting point about AI, about one of its limitations: how it lacks the human ability to break off from the script and adapt to new situations.

He read of a case where a self-driving car blocked a street and stopped somebody from getting emergency care. 

[00:44:16] Randeep Sidhu: The thing is, it’s back to: when robots fail, they fail, and they fail in an inhuman way. If you’re a human doctor asking Janna, so, do your testicles hurt?

It wouldn’t just be funny; I’d be such a failure as a doctor that I’m irrelevant. It fails so badly that it’s completely gone. And that’s like a car that just blocks a street: that’s so fundamental, how could you do that? And it breaks the system. So we humans are built for rule-breaking detection.

So the things that we pick out are these exceptions that are really breaking the rules. But actually, I think sometimes the other stuff is more sinister: the stuff that goes unnoticed, where, in the self-driving-car example, you thought the car had empathy and you just trusted it. There’s an example in London.

There was a guy who was shot dead on the Underground, Jean Charles de Menezes. It was at Stockwell, near me. He was an immigrant who’d been chased by the police and jumped, and they shot him dead. There was lots of news coverage about it. Now, why did he get killed? Because they were looking for someone, and the people on the call, the kind of anti-terrorist, whatever, transport police, were told: there’s a guy who looks like this, who we suspect to be X.

So they were given forewarning that this person was a risk, and the bias in terms of how they picked him was a human bias. Now, what’s interesting is if you look at facial recognition, risk scoring, all that kind of stuff: half my face is a beard and the other half is brown. So there are definitely challenges in identifying me compared to other people.

I get stopped all the time when I’m going through transport. Let’s say I was running for a tube and there was an algorithm that either didn’t pick up my face properly or misattributed me for other reasons, and said I was high risk. The computer tells you he’s high risk. We’re back in the Rwandan situation.

This amazing AI is telling you that this person’s a risk. What do humans do when they get told there’s a risk? They act. So the challenge is, even if there’s a human in the middle, that human trusts the AI more than they should, because it’s influencing their perception. Me twitching, getting something out of my bag.

It looks like I’m getting some wires out. It happened after the London bombings: I was pulling some cables out of my bag to put some headphones in, and three people left my train carriage. Oh, Jesus. And I was like, oh, fine, it happens. But it was interesting thinking, if that was something that was institutionalized, something I actually couldn’t avoid, there’s a genuine risk of these systems giving indications to people that could then be acted on.

So humans still act on it, but it’s the kind of surreptitious nature of how AI works. 

[00:47:03] Janna Bastow: Yeah, especially if the system were programmed that way and there were something bigger sitting behind it, as that story tells. So, maybe to end on a more positive note: are there any upcoming trends in AI or healthcare that are exciting?

[00:47:18] Randeep Sidhu: I think Gen AI is very exciting, but not for the reasons most people probably think. I don’t care that it can write an email. What I care about is the fact that it can riff; it’s a bit like jazz. I give it a thing and it can multiply that thing into lots of different varieties. So let’s say we’re both the same age.

We both have pre-diabetes, or type 2 diabetes, or we’re at risk of getting diabetes. And a doctor says to us both: eat less sugar. Okay, I’ve got a predominantly South Asian diet; what does that mean? You’re female and you have a kind of predominantly vegan diet.

What does that mean? Knowing that, it could actually customize diets for both of us, or customize information for both of us. So a doctor’s not sitting there going, oh my God, what diet should they be on? Okay, Janna, have this amount of food, do this. For me, or my mum: don’t eat roti, eat this, don’t eat this.

It could be customized around me and my lifestyle and maybe be a bit more adaptive to that. That’s the kind of thing that, in a limited way, could have a huge impact, because it’s mass personalization, not done completely randomly, but in really specific pockets: weight loss, diet, going to the gym, certain exercises.

There are domains that can have a huge impact on us. But we’re thinking about self-driving cars, whereas that’s the thing that will really help our economy and our people. That’s the thing I’m most excited about: applying that kind of thinking to general healthcare at the front line.

[00:48:49] Janna Bastow: Yeah, absolutely. And there’s a question that came in at the last minute here from David. He said, can you comment at all on explanatory AI, the AI that’s designed to cite its reasons for arriving at a conclusion? 

[00:49:03] Randeep Sidhu: So, I’m less familiar with this, but I would say most systems being built with large datasets are inherently dark.

And even if they try and explain it, the reality is, even if it’s a kind of closed-box system, you can prod it in certain ways and extract what it might be doing by giving it example case studies. So you can throw information at it, see what it spits out, and work out a little bit of what it’s doing.

I’m absolutely open to something like that. But once again, there’s a lot of false reassurance. If it explains itself: you ask, is Janna a criminal? Okay, explain why Janna’s a criminal before I decide not to release her from prison. Janna is a criminal because she’s Canadian, and Canadians have a 20 percent higher chance of reoffending; she lives in this area, she does this, and she did a drug crime.

The justification doesn’t mean that the bias isn’t there; it’s just explained itself better, so you’re more likely to believe it. The challenge there is: just because you know how the sausage was made doesn’t mean you want to eat it. So it can give false reassurance, because it gives a really smart-sounding justification.

But someone who goes, oh, hang on a minute, Canadians aren’t more prone to crime, and just because she’s a woman she’s not more prone to crime, they can challenge it. But if you didn’t have that insight, you might just believe it. 

[00:50:20] Janna Bastow: Yeah, and that’s a great point. I agree with David there. Great point. I also often find that with ChatGPT, it’ll be wrong on something.

And as soon as you say, are you sure about that? It’ll go, oh yeah, I apologize for that. And it’ll just completely retract what it says and say something completely different. It gave me one the other day where I was prodding it on something, and I was like, no, I told you this up here, and it wasn’t listening.

And it said, which answer do you prefer? It gave me a choice: one answer suggested that it did know what I was talking about, sure, go ahead; and the other answer was, I don’t know what you’re talking about. And I was like, wait, so you don’t know?

You’re just trying to tell me what I want to hear. And when I said, go with the answer where you do know what I’m talking about, and then asked the next question, it did not know what I was talking about. I was like, never mind, look, you are just feeding me the usual. 

[00:51:10] Randeep Sidhu: I’m going to tell you something which I probably shouldn’t share publicly or even have recorded.

I’ve basically opted out of ChatGPT because I don’t think it’s quite good enough. I’m waiting. Like, four is good; three and three and a half were okay, but four is much better. When it gets to five, then I’m going to engage properly, because I think some of the crap you have to deal with now is just going to get removed and it’s just going to work better.

So I’m holding off, waiting until it gets better, and then I’ll engage fully, because I don’t want to be part of the training data. I don’t want to be a beta tester. 

[00:51:36] Janna Bastow: Oh, that’s a good point. All right. Hey, let’s watch this space and see what comes out with GPT-5 and beyond. And let’s see what happens with the AI space.

Thank you so much for taking the time to share your insights and your experience with us today. This has been a really fascinating conversation. 

[00:51:53] Randeep Sidhu: I’ve rabbited on too long, my apologies, and most people probably got bored. But yes, thank you for your patience and for listening. 

[00:51:59] Janna Bastow: This has been fascinating. So for everybody who comes back on a regular basis, meet us back here on October 26th.

We’re going to have Simon Cross joining us. And for everybody who would like to learn more about ProdPad, jump in, give it a try, get a demo; we’re always happy to show you through. But also I just want to say again, big thank you to Randeep. Thank you for everything that you’ve done here today.

 Awesome. Thank you everybody and bye for now. 

Watch more of our Product Expert webinars