Submitted by FresckleFart19 t3_z2hr4c in MachineLearning
I have uploaded two repositories to GitHub. The code was personal, so it's pretty much undocumented, but due to personal issues I currently can't work on them, and maybe the ideas here will inspire someone.
The main ideas are:
- Seeing categories as ensembles of ML models with more complex structure than X->(Y1,Y2,...), and using commutative diagrams as optimization objectives, with equality of morphisms (= models) replaced by some loss/objective function (see the sketch after the link).
https://github.com/BeNikis/Category-Theoretic-Model-Ensembles
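
  A minimal sketch of that idea, not how the repo actually implements it: take a commutative square of four models and relax "the two composite paths are equal" into a differentiable penalty between them.

  ```python
  # Hypothetical sketch: a commutative square as an optimization objective.
  # Commutativity g∘f = k∘h is relaxed to an MSE penalty between the two paths.
  import torch
  import torch.nn as nn

  # Four morphisms of the square A -f-> B -g-> D and A -h-> C -k-> D,
  # each realized as a small network (dimensions are illustrative).
  f = nn.Linear(8, 16)   # A -> B
  g = nn.Linear(16, 4)   # B -> D
  h = nn.Linear(8, 16)   # A -> C
  k = nn.Linear(16, 4)   # C -> D

  params = list(f.parameters()) + list(g.parameters()) + \
           list(h.parameters()) + list(k.parameters())
  opt = torch.optim.Adam(params, lr=1e-3)

  def commutativity_loss(x):
      # "Equality of morphisms" g∘f = k∘h replaced by a loss function.
      return nn.functional.mse_loss(g(f(x)), k(h(x)))

  for step in range(100):
      x = torch.randn(32, 8)          # a batch of objects of "type" A
      loss = commutativity_loss(x)
      opt.zero_grad()
      loss.backward()
      opt.step()
  ```
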
- Using language models and some formal language for describing categories to automate the above work. If the base category has a '2nd level' of typing (for example, in a category containing only tensors, two objects could be different patches of an image of the same size; the shapes of those patches act as the '2nd level' types, and we can only apply a morphism that accepts that type), we could automatically find pathways (compositions of models) that do what we want (see the sketch after the link). Or, if the category we're working in is, say, Hask (Haskell types and programs), this could be used for automated programming.
https://github.com/BeNikis/Manipulating-Categories-With-ML
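
  A minimal sketch of the pathway-finding idea under my own assumptions (the morphism table and names are hypothetical, not the repo's API): key each morphism by its input/output type (here, tensor shapes) and search for a composition from a source type to a target type.

  ```python
  # Hypothetical sketch: find a composition of typed morphisms via BFS.
  from collections import deque

  # Morphism table: (input_type, output_type) -> name of the model.
  morphisms = {
      ((3, 64, 64), (3, 32, 32)): "downsample",
      ((3, 32, 32), (128,)):      "encoder",
      ((128,), (10,)):            "classifier",
  }

  def find_pathway(src_type, dst_type):
      """Return a list of morphism names composing src_type -> dst_type, or None."""
      queue = deque([(src_type, [])])
      seen = {src_type}
      while queue:
          current, path = queue.popleft()
          if current == dst_type:
              return path
          for (a, b), name in morphisms.items():
              if a == current and b not in seen:
                  seen.add(b)
                  queue.append((b, path + [name]))
      return None

  print(find_pathway((3, 64, 64), (10,)))
  # -> ['downsample', 'encoder', 'classifier']
  ```
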
- I have this very general concept of an agent-environment adjunction. An adjunction in category theory is a very loose but deep relationship between two categories, basically 'an isomorphism up to a specified morphism'. In the agent-environment case, the agent perceiving the environment is the forgetful functor (in reference to the many free-forgetful adjunctions), because we unavoidably lose some information when we perceive with limited sensors, and inferring the overall state of the environment from the agent's known information would be the free functor. Now, combining this with the above two ideas: the two categories could be the category of states of the environment and the category of the agent's ML model ensembles; the adjunction itself could be seen as an optimization objective (the information from the agent's sensors is injected into the category by the DataMorphism class in the first repo); and we could build better and better agent states by building up that category with (co)limits, which again are fuzzified with some yet-unknown unsupervised objective. A rough sketch of the adjunction-as-objective part follows below.
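
A rough, hypothetical sketch of that last part (my own simplification, not the repo's implementation): "perceive" plays the forgetful functor, "infer" plays the free functor, and the unit/counit conditions of the adjunction are relaxed into reconstruction losses, which is exactly the autoencoder-like structure mentioned in the next paragraph.

```python
# Hypothetical sketch: the agent-environment adjunction as a training objective.
import torch
import torch.nn as nn

ENV_DIM, OBS_DIM = 32, 8                 # illustrative dimensions

perceive = nn.Linear(ENV_DIM, OBS_DIM)   # "forgetful": lossy sensing of the environment
infer    = nn.Linear(OBS_DIM, ENV_DIM)   # "free": reconstruct the environment state

opt = torch.optim.Adam(list(perceive.parameters()) + list(infer.parameters()), lr=1e-3)

def adjunction_loss(env_state, observation):
    # unit:   env -> infer(perceive(env))  should be close to the identity
    # counit: perceive(infer(obs)) -> obs  should be close to the identity
    unit   = nn.functional.mse_loss(infer(perceive(env_state)), env_state)
    counit = nn.functional.mse_loss(perceive(infer(observation)), observation)
    return unit + counit

for step in range(100):
    env_state   = torch.randn(16, ENV_DIM)   # sampled environment states
    observation = torch.randn(16, OBS_DIM)   # sampled sensor readings
    loss = adjunction_loss(env_state, observation)
    opt.zero_grad()
    loss.backward()
    opt.step()
```
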
This idea is similar to what is already happening in both ML and CT. On the ML side we have autoencoders and diffusion models, which go from environment -> 'agent' (some intermediary code) -> back to environment, and on the CT side we have, for example, this paper on a syntax-semantics view of language models, which rings bells with the syntax-semantics adjunction in categorical logic:
https://arxiv.org/abs/2106.07890
I'm posting this due to personal stuff and because I'm currently on the edge of exhaustion working on this, so maybe bringing these ideas up will keep them from going to waste, if they're valuable in the first place.
-xylon t1_ixgpklv wrote
...wtf?