Good advice, but I think it should talk more about the distinction between the training objective and the true objective. For classic machine learning problems, like speech recognition or face detection, the two were so close that we didn't even notice there was a difference. However, now ML models are being trained to predict clicks or other proxies of "engagement", and these can be wildly divergent from the humane objectives we want in our products. In these cases it's really important to understand the gap between what you really want and what you can encode into an objective function.
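As a rough illustration of that gap (a toy sketch, not from the article; all names and numbers here are made up), imagine ranking items by a proxy signal that is only weakly correlated with what users actually value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each item has a long-term value to the user
# (the objective we actually care about) and a "clickiness" score
# (the proxy the model is trained on). The two are only weakly correlated.
n_items = 1000
true_value = rng.normal(size=n_items)                       # what we want to maximize
clickiness = 0.2 * true_value + rng.normal(size=n_items)    # what we can measure

# A model trained purely on clicks ends up ranking by clickiness.
top_by_proxy = np.argsort(-clickiness)[:10]
top_by_true = np.argsort(-true_value)[:10]

print("Mean true value of items picked by the click model:",
      true_value[top_by_proxy].mean())
print("Mean true value of the items we actually wanted:   ",
      true_value[top_by_true].mean())
# The difference between these two numbers is the objective gap.
```

The click model looks great on its own metric while leaving most of the true value on the table, which is exactly the divergence worth keeping in mind.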
This appears to be pretty old (it has a reference to Google Plus) but seems like good generic advice.
Should say "(2018)".
A rare example of practices from Google which would be transferable to other organizations.
Rule 1 is golden and oft forgotten.