Reflections on MLOps

A rare event! Last night, I attended the London MLOps meetup in person, something that hasn't been possible for getting on for two years. Despite a massive run from IT folk on a limited supply of pizza (clearly making up for lost time!), it was great to get out to a well-attended and topical event. My colleague Phil Basford was presenting, along with a speaker from Marks & Spencer.

It gave me a chance to reflect on the state of MLOps and its adoption… so here’s a quick brain dump.

 

You don’t really (realise you) need it until you hit scale 

…but that scale (in terms of number of models) is becoming more and more common. When we started Inawisdom, ML models in production were as rare as hen’s teeth – I like to think Inawisdom has been part of the movement to change that.

When you’ve got one model in production, you can live with ad hoc processes, calling on Dave the Data Scientist when the model starts doing weird stuff. When you’ve got 100s of models in production… you can’t. Or more precisely, you won’t get to 100s of models in production without recognising this and raising your MLOps game.
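The shift from “call Dave” to something systematic usually starts with automated checks. As a minimal illustration (the function, threshold and metric are hypothetical, not a prescription), the kind of drift alarm that replaces an ad hoc look at a misbehaving model might be no more than:

```python
# Hypothetical sketch: an automated check standing in for
# "ask Dave why the model is doing weird stuff". It compares the
# live positive-prediction rate against the rate observed at
# training time and flags the model once it drifts too far.
def check_prediction_drift(live_predictions, baseline_rate, tolerance=0.1):
    """Return True if the live positive rate drifts beyond tolerance.

    live_predictions: iterable of 0/1 predictions from production.
    baseline_rate: positive rate measured on the training data.
    """
    if not live_predictions:
        return False  # nothing scored yet, nothing to compare
    live_rate = sum(live_predictions) / len(live_predictions)
    return abs(live_rate - baseline_rate) > tolerance
```

With one model, a human eyeballing a dashboard does this job; with hundreds, checks like this have to run (and alert) unattended – which is precisely the MLOps game being raised.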

 

The MLOps technology tooling landscape is complex and changing fast 

This means that what is right for you and best practice today almost certainly won’t be in two years’ time… so your approach needs to accept and reflect that, i.e. favour agility and flexibility to change and adapt over excessive process optimisation.

The hyperscalers and surrounding vendor ecosystem are evolving fast, as is the open-source space, so we are almost guaranteed to be building in technical debt and creating next year’s legacy implementations. How we will laugh in a few years’ time! It reminds me a little of the evolution public cloud implementation best practices went through over the last decade.

The only approach is to recognise and embrace it. You can’t drive the maturity of an entire industry segment on your own.

 

Batch is where it’s at for most organisations 

Despite all the focus on super low-latency real-time inference use cases, the predominant inference model is batch in nature. Good old batch processing is simplest from an engineering and operations standpoint and addresses most business use cases, especially when coupled with its close cousin micro-batch. And it’s generally a lot cheaper at run time.

That’s not to say that organisations don’t need real-time inference – it may only represent 10% of production deployment needs (a guess), but that means we still need design patterns and MLOps processes to support it, of course.

 

It’s a journey with no (near) end, and that’s fine 

Something from Nick’s talk that really struck me was that he had a very long list of areas he still wanted to work on for M&S’s MLOps processes and execution. So it’s not so much that it’s never “done”; it’s more that there’s always a backlog of items from the last retrospective to be tackled.

This could be seen as negative, but as the business needs evolve (and the technology landscape as described above), it’s a positive to have a living, adaptable and learning process that can evolve with it.

 

Conclusions

Last night’s event was very well-attended, which reflects the interest and relevance of this topic – the world has woken up to the problem that training the ML models is only the start of the journey, albeit still the really high-value part of it (i.e. no model, no business value).

It’s a super-interesting and rapidly evolving space to be working in! 

 

Catch up on both talks from the event here: MLOps London November 2021 – Talks from M&S and Inawisdom 

Robin Meehan
robin@inawisdom.com