Just got the latest Dato Blog post, "How to evaluate ML models, Part 3: Validation and Offline Testing." It made me add the link to my link collection there on the right. This blog is well worth subscribing to, if only to get "caught up" on the language I was ranting about yesterday.
As most of "our" work is, or has been, done with static models, we've all smashed into the wall of trying to correct a bad model: analyzing what varies, what drives that variance, and what new factors are impacting load. When capital is expended based upon a static model, trying to change that model runs into big-o buckaroos. One of the really big deals I'm getting from my reading is the availability and granularity of historical data. Honestly, the more I read, the more it appears to my simple mind that historical data is what makes "Machine Learning" possible.
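To make that point concrete, here's a minimal sketch of why granular history matters: with enough past records, you can validate a model "offline" by training on the older portion and testing on the newer portion you held back. Everything here is hypothetical (made-up load numbers, a deliberately dumb "static" model), just to show the shape of the idea.

```python
# Minimal sketch of offline validation on historical data (hypothetical numbers).
# Idea: split records chronologically, fit on the past, and score on the
# held-out "future" -- the granularity of the history is what makes this possible.

# Hypothetical monthly load readings, oldest first.
history = [10.0, 12.0, 11.5, 13.0, 14.2, 13.8, 15.1, 16.0]

# Chronological split: never shuffle time-ordered data before validating.
split = int(len(history) * 0.75)
train, test = history[:split], history[split:]

# A "static model": predict the mean of the training window, forever.
static_pred = sum(train) / len(train)

# Offline test: mean absolute error on the held-out recent data.
mae = sum(abs(actual - static_pred) for actual in test) / len(test)
print(f"static-model MAE on held-out data: {mae:.2f}")
```

The growing error on the held-out slice is exactly the kind of signal a static model never surfaces on its own, and it's what the "fluid" models discussed below get to react to.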
This article points out that very specific difference. While it is discussing social data, performance data driving "Machine Learning" will drive a more fluid idea of a model. And if models are fluid, how will costs be assigned? What is acceptable variance? A lot of interesting questions to think about. Anyway, this is one of the least confusing threads I've found about machine learning, and it's well worth reading just to update your language, guys. Really. No, don't go out there wearing plaid pants with a waist up to your armpits. No. White socks and Birkenstocks are never a good idea. Go shave your nose hair. Really.