fullstackai t1_jcopazt wrote
Reply to comment by banatage in [Discussion] Future of ML after chatGPT. by [deleted]
100% agree. Also, any AI that requires sensor data (e.g., in manufacturing) cannot easily be replaced by foundation models.
fullstackai t1_jcokcsq wrote
I treat the code artifacts of ML pipelines like any other software and aim for 100% test coverage. Probably a bit controversial, but I always keep a small amount of example data in the repo for unit and integration tests. It could also be downloaded from blob storage in the CI pipeline, but repo size is usually not the limiting factor.
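As a minimal sketch of what that can look like (module names, file paths, and the column schema below are illustrative assumptions, not from the original comment), a unit test can run the pipeline's preprocessing step against a tiny sample file checked into the repo:

```python
# tests/test_preprocessing.py
# Sketch: unit test of a preprocessing step using a small example CSV
# kept in the repo. Paths, module names, and columns are placeholders.
from pathlib import Path

import pandas as pd
import pytest

from my_pipeline.preprocessing import preprocess  # hypothetical pipeline module

# A few rows only, so repo size stays negligible and CI needs no network access.
SAMPLE = Path(__file__).parent / "data" / "sample_batch.csv"


def test_preprocess_output_schema():
    raw = pd.read_csv(SAMPLE)
    features = preprocess(raw)
    # Assert the contract that training and serving code rely on.
    assert list(features.columns) == ["sensor_a", "sensor_b", "target"]
    assert not features.isna().any().any()


def test_preprocess_rejects_empty_input():
    # Edge case: the pipeline should fail loudly on empty input.
    with pytest.raises(ValueError):
        preprocess(pd.DataFrame())
```

Keeping the fixture to a few kilobytes is usually enough to exercise the schema and edge cases without bloating the repo.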
fullstackai t1_jcttgyd wrote
Reply to comment by gamerx88 in [D] Unit and Integration Testing for ML Pipelines by Fender6969
I should have been more precise: 100% of what goes into any pipeline or deployment gets tested. We deploy many models on the edge in manufacturing; if a model fails, the production line might stand still, and we can't risk that.
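For illustration only (not the commenter's actual setup), a pre-deployment smoke test could load the exported model artifact and verify that inference works on a known sample before the artifact ships to the edge device. The ONNX format, file names, shapes, and latency budget below are assumptions:

```python
# tests/test_deployment_smoke.py
# Sketch of a pre-deployment smoke test, assuming an ONNX export and a
# small example input stored in the repo. All names and thresholds are
# illustrative assumptions.
import time
from pathlib import Path

import numpy as np
import onnxruntime as ort

ARTIFACT = Path("artifacts/defect_detector.onnx")       # hypothetical export path
SAMPLE_INPUT = np.load("tests/data/sample_input.npy")    # small example tensor


def test_model_loads_and_predicts():
    session = ort.InferenceSession(str(ARTIFACT))
    input_name = session.get_inputs()[0].name

    start = time.perf_counter()
    outputs = session.run(None, {input_name: SAMPLE_INPUT.astype(np.float32)})
    latency = time.perf_counter() - start

    # The serving contract: one output tensor, one prediction per input row,
    # and inference fast enough for the line's cycle time.
    assert len(outputs) == 1
    assert outputs[0].shape[0] == SAMPLE_INPUT.shape[0]
    assert latency < 0.1  # assumed budget; tune to the actual cycle time
```

Running a test like this in CI, against the exact artifact that will be deployed, is one way to catch a broken export before it reaches the production line.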