Submitted by valdanylchuk t3_y9ryrd in MachineLearning
idrajitsc t1_it7u7c2 wrote
Reply to comment by valdanylchuk in [D] Do any major ML research groups focus on policy-making applications? by valdanylchuk
You can say that about any problem: maybe some hypothetical, very powerful AI could solve it better than we can. That isn't, on its own, a good reason to pursue something. Is there any real reason AI is well suited to this problem? It's hard to imagine how you'd quantify all the important outcomes and encode all the important inputs for something as complicated as real-world policy.
And some political problems don't admit a balance of interests. In the US, for example, some politicians actively run on anti-government platforms because an ineffectual government gives more power to their donors. There's no real way to square that with a government that solves problems; the two goals are diametrically opposed. The other poster is entirely right that improving current policy proposals is nearly irrelevant to getting good policy implemented.
valdanylchuk OP t1_it86vex wrote
I agree my response was hand-waving, but GP suggested giving up a priori on any attempt at AI/ML-based policy optimization, which I think is too strict. If we never attack these problems, we can never win. And different people can try different angles of attack.
idrajitsc t1_it88yrd wrote
The thing is that there can be a cost to just giving things a go. Consider the work claiming to predict personality traits or sexuality from facial characteristics, or the recidivism predictors that just launder existing racist practices. There are many existing examples of marginalized groups getting screwed over in surprising ways by ML algorithms. Now imagine the damage that society-wide policy proposals could do, and ask whether you could really specify a problem that complex well enough to control those dangers.
It's not okay to just throw AI at an important problem and see what sticks. You need a well-founded reason to believe AI is capable of solving the problem you're posing, and a very thorough analysis of the potential harms and how you're going to mitigate them.
And really, there's no reason to think near-term AI has any business addressing this kind of problem. AI doesn't do, and isn't near, the kind of fuzzy, flexible reasoning and synthesized multi-domain expertise this work needs. The usual problem with optimizing proxy metrics, that they diverge from what you actually care about, would be an overriding concern here.
RobbinDeBank t1_it865zk wrote
I would say the complicated nature of real-world policy is exactly why AI will eventually be capable of making better policy than humans. Economists can still produce optimized social and economic policies, but they just can't account for a hundred different interest groups with different political motives in a real-world scenario. AI systems can, given the computing power they possess. I think AI can be a key to incremental societal progress. Instead of the current situation where oligarchs get the whole pie, the AI solution could leave them a good chunk while the public gets a decent chunk too. That's incremental progress: not ideal, but achievable.
idrajitsc t1_it8b8p2 wrote
I mean, economists can account for competing concerns; they have been doing so for centuries. The problem isn't a lack of processing power, it's that those concerns compete: you have to make subjective decisions that favor some and harm others.
Also, you're just asserting that AI will be able to solve problems there's no reason to believe it can; scaling compute is not the be-all and end-all of problem solving. What kind of objective or reward function do you think you could write that does even a half-decent job of capturing the impact of social and economic policy on all those different interest groups? Existing AI methods just are not amenable to a problem like this.
RobbinDeBank t1_it8eogo wrote
Let's say we have this problem with 100 sides: the public and 99 interest groups. In the ideal world, we want to maximize public good (low unemployment, high economic growth, high income, low financial and social inequality, etc.) at all costs, and no interest group should have more power than the average individual. But we all know that's not the real world, where those interest groups hold disproportionate political power. So the problem becomes a constrained optimization problem: we still maximize public good, but subject to the constraints those interest groups create. The main constraint is the number of votes needed to pass the policy (maybe 51%, maybe more like 60% or 66%): partially satisfy the interest groups just enough to reach that majority. This is essentially a trade-off, sacrificing part of the ideally optimized public good to gain enough votes to get the policy passed. That constraint then has to be broken down further to account for each of the 99 groups. Together, this is a huge and complex constrained optimization problem, and the solution could be something like "give in to most of the demands of 90 groups, fuck the other 9, now we have enough votes and the public still benefits a whole lot."
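To make that concrete, here's a toy sketch of the trade-off I mean, with every group and number invented purely for illustration: each hypothetical group delivers some votes in exchange for a slice of the public-good score, and we brute-force the cheapest coalition that clears the vote threshold.

```python
# Toy coalition trade-off: which interest groups to appease so a policy
# clears a vote threshold at the least cost to the public good.
# All names and numbers here are made up for illustration only.
from itertools import combinations

# hypothetical groups: (votes delivered, public good conceded)
groups = {
    "A": (12, 3.0), "B": (9, 1.5), "C": (20, 8.0),
    "D": (7, 0.5), "E": (15, 6.0), "F": (11, 2.0),
}
BASE_VOTES = 30    # votes the policy gets with no concessions
THRESHOLD = 51     # simple majority
BASE_GOOD = 100.0  # public good with no concessions

best = None
for r in range(len(groups) + 1):
    for coalition in combinations(groups, r):
        votes = BASE_VOTES + sum(groups[g][0] for g in coalition)
        good = BASE_GOOD - sum(groups[g][1] for g in coalition)
        if votes >= THRESHOLD and (best is None or good > best[1]):
            best = (coalition, good)

print(best)  # (('B', 'D', 'F'), 96.0): appease the cheap groups, keep most of the good
```

Brute force only works here because the toy has six groups; with 99 groups and realistic interactions among them, even this stripped-down version blows up.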
That is a rough idea from me, without expert domain knowledge. With the funding of major AI labs like DeepMind and the expertise they can bring, the problem can definitely be solved in a real-world setting. Human economists can only write a solution to a smaller problem within one industry, for example, not to a problem this complex.
idrajitsc t1_it8iuw2 wrote
It is absolutely not true that the problem can "definitely be solved." You have no grounds for such a ridiculously confident statement about such a complicated problem. AI is not magic that can solve any problem you pose if you just sacrifice enough GPUs to the ML god.
The notion of constrained optimization is not exactly new; that isn't the hard part. And while solving a constrained multi-objective optimization problem is generally gonna be NP-hard, if it even has a well-defined solution, even that isn't actually the hard part.
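Just to pin down what's being gestured at: written abstractly, the problem looks something like

```latex
% Multi-objective constrained optimization, stated abstractly:
% x is a candidate policy, f_i the (unknown!) utility of group i,
% g_j the feasibility/vote constraints.
\max_{x \in X} \; \bigl( f_1(x), \dots, f_k(x) \bigr)
\quad \text{s.t.} \quad g_j(x) \le 0, \quad j = 1, \dots, m
```

Even if every f_i and g_j were handed to you, a multi-objective problem generally has no single optimum, only a Pareto front, and choosing a point on it (say, via weights w_i in a scalarized sum \sum_i w_i f_i(x)) is exactly the subjective judgment call I mentioned above.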
The problem is figuring out what the inputs and measured outcomes should even be, and then getting them into a form an AI can actually process. I wasn't asking you to confirm that this would be framed as an optimization problem; that's what they all are. I was asking what the actual objective and actual constraints are. There is no way you can summarize every important impact of an economic policy in an objective function, much less do so while differentiating impacts across interest groups. Nor could you actually encode all of the potentially relevant input information.
And then what would you even train on, if you could accomplish that already impossible task? It's not like we have a large or terribly diverse set of worked examples of fully characterized policies and outcomes. And if you wanted to take a more unsupervised route, it basically amounts to accurately simulating an economy, which in itself would be worth all the Nobel Prizes.