AmbulatingGiraffe t1_j1y2zs7 wrote
Reply to [D] Protecting your model in a place where models are not intellectual property? by nexflatline
I don’t have a quick answer, but you might want to look into how the video game industry deals with preventing decompilation. If your code relies on a Python-based framework you may be out of luck, but if it’s possible to move to a compiled language there are more options available to you (see the sketch below).
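As a rough illustration of that idea, assuming a PyTorch model (other frameworks with C++ runtimes offer similar export paths), you could serialize the trained model to TorchScript and serve it from a compiled binary instead of shipping Python source:

```python
import torch
import torchvision.models as models

# hypothetical stand-in: any trained nn.Module would work here
model = models.resnet18(weights=None)
model.eval()

# trace the model with a representative input and serialize it;
# the resulting file can be loaded by libtorch from a compiled
# C++ binary (torch::jit::load), so no Python source has to ship
example_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)
traced.save("model.pt")
```

To be clear, this raises the bar rather than making extraction impossible; the serialized weights still live in the artifact, same as game binaries can still be reverse engineered.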
AmbulatingGiraffe t1_j1ugwcm wrote
Reply to comment by dissident_right in NYC's AI bias law is delayed until April 2023, but when it comes into effect, NYC will be the first jurisdiction mandating an AI bias order in the world, revolutionizing the use of AI tools in recruiting by Background-Net-4715
This is objectively incorrect. One of the largest problems related to bias in AI is that accuracy is not distributed evenly across different groups. For instance, the COMPAS exposé revealed that an algorithm being used to predict who would commit crimes had significantly higher false positive rates (flagging someone as likely to commit a crime who then didn’t) for Black people. Similarly, its accuracy was lower for predicting serious violent crimes than for misdemeanors and other petty offenses. It’s not enough to say that an algorithm is accurate and therefore unbiased, merely showing truths we don’t want to see. You have to look very carefully at where exactly the model is wrong, and whether it’s systematically wrong for certain kinds of people or situations (a minimal sketch of that kind of check is below). There’s a reason this is one of the most active areas of research in the machine learning community: it’s an important, hard problem with no easy solution.
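As a minimal sketch of that per-group error analysis (the toy data, group labels, and 0/1 encoding are all made up for illustration):

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    # FPR = FP / (FP + TN): the share of true negatives the model
    # nevertheless flags as positive
    negatives = y_true == 0
    if negatives.sum() == 0:
        return float("nan")
    return float((y_pred[negatives] == 1).mean())

# toy data: 1 = predicted/actual reoffense; group labels are hypothetical
y_true = np.array([0, 0, 0, 1, 0, 0, 0, 1])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# overall accuracy can look fine while per-group error rates diverge:
# here accuracy is 0.75, but group A's FPR is 0.67 vs 0.0 for group B
print("overall accuracy:", float((y_true == y_pred).mean()))
for g in np.unique(group):
    mask = group == g
    print(f"group {g} FPR:", false_positive_rate(y_true[mask], y_pred[mask]))
```

And equalizing these error rates is itself nontrivial: several natural fairness criteria provably can’t all hold at once, which is part of why this remains an open research problem.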
AmbulatingGiraffe t1_j21ojdt wrote
Reply to comment by I_will_delete_myself in [D] Protecting your model in a place where models are not intellectual property? by nexflatline
Honestly I don’t know enough about it to provide an informed comment. Maybe worth looking at for OP.