Hyperparameters that can save your AWS bills
Once your training runs become material in terms of wall-clock time and hardware budget, it's time to look at optimizing your batch size. If your batch size is too small, your training runs take longer than necessary and you are wasting money. If your batch size is too large, you are paying for more GPU memory, and therefore more expensive hardware, than the job actually needs.
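As a rough illustration of how you might find that sweet spot (this is a minimal sketch, not Masterful's implementation), the snippet below uses PyTorch to time a few training steps at several candidate batch sizes. The batch size where samples-per-second stops improving, before you hit out-of-memory, is usually the most cost-effective choice on a given GPU. The model, input shape, and candidate batch sizes are placeholders for your own workload.

```python
# Sketch: probe candidate batch sizes and measure training throughput.
# The tiny model and CIFAR-sized inputs below are stand-ins; swap in your own.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for batch_size in (64, 128, 256, 512, 1024):
    try:
        x = torch.randn(batch_size, 3, 32, 32, device=device)
        y = torch.randint(0, 10, (batch_size,), device=device)

        # Warm up a few steps, then time a fixed number of steps.
        for _ in range(3):
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        if device == "cuda":
            torch.cuda.synchronize()

        steps = 20
        start = time.time()
        for _ in range(steps):
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        if device == "cuda":
            torch.cuda.synchronize()

        throughput = batch_size * steps / (time.time() - start)
        print(f"batch_size={batch_size}: {throughput:.0f} samples/sec")
    except RuntimeError as e:
        if "out of memory" in str(e).lower():
            print(f"batch_size={batch_size}: out of memory, stopping here")
            break
        raise
```

If throughput plateaus at, say, 256 while 512 still fits in memory, the larger batch is buying you nothing: you could train just as fast on the smaller setting, or move to cheaper hardware.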
Figure: percent top-1 error reductions achieved when moving from Google Vertex to Masterful AI.
It’s hard to stay current and maintain competency in deep learning. It’s a young, fast-growing field, which means groundbreaking research and innovations arrive rapidly. But at Masterful, we don’t have a choice: we have to stay current, because the promise we make to developers is that our platform automatically delivers state-of-the-art approaches for computer vision (CV) models in a robust and scalable way.