Last week I was back at Oxford (or, in the vernacular, up at), for Module 3 of the Business of AI Diploma. The cohort is made up of fascinating people: marketing executives, commodity traders, consultancy partners, lawyers, sales leaders, founders, investors, regulatory experts, UX designers, game experts, CIOs, M&A partners and more, from all corners of the map. I’m probably learning as much from them as I am from the course itself. I’m already identifying deal flow because of this new network. More importantly, I’ve made some new friends.
Module 1 and Module 2 write-ups here.
My bow-tie tying skills have also returned.
The module was titled AI in Practice. Two lectures stood out for me on this module, for very different reasons.
AI and Healthcare
I’ve mentioned Dr Agni Orfanoudaki in a previous post. She is able to convey complex topics and field questions from this heterogeneous, loquacious and sometimes rambunctious audience with precision and discipline. She owns the room. This lecture looked at the opportunities for AI in healthcare and insurance. She spent some time discussing the work she did during Covid, developing a technique for allocating beds that was used by several hospitals. I lost a parent to Covid, and this was a stark reminder that AI isn’t all about AGI: it has a profound impact on our lives today. She explored the challenges of data quality, hidden biases, missing variables and more, showing how they built a successful model that had a genuine positive impact in saving lives.
Agni referred to two techniques: the XGBoost decision-tree method and SHAP. XGBoost is an ML technique that has been around for over a decade. Like a random forest, it uses multiple decision trees, but rather than averaging many independently grown trees, it uses gradient boosting: trees are built sequentially, each new one fitted to correct the remaining errors of the ensemble so far, always moving in the direction that reduces the loss the most. If you need help figuring out how gradient boosting works (I did), check out this video. There is a paper here for those wanting to learn more on the Covid admissions use case and relevant techniques.
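To make the "each tree corrects the errors of the last" idea concrete, here is a toy sketch of gradient boosting in pure Python, using one-split "stumps" on 1-D data. The data, learning rate and round count are all made up for illustration; a real library like XGBoost adds regularisation, second-order gradients and much more.

```python
def fit_stump(xs, residuals):
    """Find the single split on x that best reduces squared error."""
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, split, lv, rv)
    _, split, lv, rv = best
    return lambda x: lv if x <= split else rv

def gradient_boost(xs, ys, rounds=50, lr=0.3):
    """Add stumps sequentially, each fitted to the current residuals
    (the negative gradient of squared loss)."""
    base = sum(ys) / len(ys)              # start from the mean
    pred = [base] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Toy data: a step function the ensemble should learn.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [1, 1, 1, 1, 5, 5, 5, 5]
model = gradient_boost(xs, ys)
```

Each round, the residuals shrink and the next stump attacks what is left, which is exactly the "always move in the direction that reduces errors" behaviour described above.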
SHAP (SHapley Additive exPlanations) is a method for explaining the output of a machine learning model. It relies on game theory (the Russell Crowe movie, if you remember back that far). Claude helpfully tells me that SHAP is like breaking down a recipe (the model's prediction) into its ingredients (features), showing exactly how much each ingredient contributed to the final dish for each individual serving (prediction). This helps in understanding why the model made a particular prediction for a specific individual and which factors were most influential. Agni noted that in a medical context, having robust explainability encourages adoption by clinical staff and patients. If you would like a video on how SHAP works in more detail, with a special bonus South African accent, check out this one.
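The recipe analogy can be made precise: a Shapley value is a feature's marginal contribution, averaged over every order in which features could be "added" to the prediction. Here is a brute-force sketch on a made-up three-feature linear risk score (the model, weights and data are invented for illustration; the real SHAP library approximates this far more efficiently for big models).

```python
from itertools import permutations

# Toy "model": risk score from three pre-scaled features (made up).
def model(features):
    weights = {"age": 2.0, "bmi": 3.0, "smoker": 1.0}
    return sum(weights[f] * v for f, v in features.items())

def shapley_values(model, x, background):
    """Average each feature's marginal contribution over all orderings.
    Features not yet 'added' keep their background (baseline) value."""
    names = list(x)
    values = {f: 0.0 for f in names}
    orders = list(permutations(names))
    for order in orders:
        present = dict(background)      # start from the baseline patient
        prev = model(present)
        for f in order:
            present[f] = x[f]           # add feature f to the coalition
            cur = model(present)
            values[f] += cur - prev     # f's marginal contribution
            prev = cur
    return {f: v / len(orders) for f, v in values.items()}

x = {"age": 1.5, "bmi": 2.0, "smoker": 1.0}        # this patient
background = {"age": 1.0, "bmi": 1.0, "smoker": 0.0}  # average patient
phi = shapley_values(model, x, background)
```

The "Additive" in the name is the key property: the per-feature values always sum exactly to the gap between this prediction and the baseline prediction, so a clinician can see precisely which factors pushed the score up or down.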
The second part of her lecture looked at the development of algorithmic insurance models. I would have liked to spend a whole day on this. As AI does more on our behalf, the question of who is liable for AI’s mistakes becomes more important.
Capitalism, trade and technology innovation have relied on insurance since, gosh, the early shipping industry (Lloyd’s Coffee House etc); car insurance is another parallel.
“As machine learning algorithms start to get integrated into the decision-making process of companies and organizations, insurance products are being developed to protect their owners from liability risk.”
Agni and colleagues have been working on models to price this sort of insurance. My conjecture is that AI and product liability regulation are going to strengthen over time, so for some use cases, algorithms will be insured. The question they explore is how could such models be priced? Again, here’s the paper.
AI and Media
On the final morning, we had a lecture from Dr Alex Connock. If you show up to a lecture in a Quentin Tarantino t-shirt, you need an A game, and Alex had one. He is a world-leading expert in media, and it shows. Let’s face it, most academic slides are dull at best, and some are significantly worse than that. Alex’s presentation was a magnificent multimedia onslaught of 135 slides, many seemingly brand new. I can’t think of a more immersive presentation; his use of video, music and the microphone was remarkable. Even though the slides were brilliant, he managed to keep the room focused on him and his arguments. My classmate Leo Lo has an excellent write-up here. Leo runs an ad agency, so he is at the centre of this.
My takeaway is that I simply have no idea what AI is going to do to creative work. Is it making new jobs, killing jobs, enabling or destroying innovation, killing copyright, destroying truth and democracy? I simply don’t know. And if the people that study this stuff every day don’t know either, then I guess I have to be okay with that. What is clear, though, is that the media industry is now an AI industry.
Alex got extra points from me for mentioning New Order. A friend gave me this first pressing the other day: best record cover ever. And a mighty fine record too.
I just had to go back to 1983 for this post’s song.
And as an extra bonus. Let’s go analogue.
I have developed a bit of a library of books on AI, and at some point I’ll write up a couple of reviews. There are some bangers, but none quite as good as New Order.
Congratulations to the hackathon winners, and my special thanks to our class reps, Meryem and Sebastian. Your dedication to your office is quite remarkable. Oh and huge thanks to Jason for holding the fort at work.
I’m taking what I’m learning and using it to refine our investment strategy and how I advise our portfolio companies. At times things seem to be shifting so quickly that models are helpful in filtering signal from noise. I’m still figuring out my future-of-work-and-AI model; more to come.