Great question! Rudin et al.'s approach elegantly builds an optimal decision tree through search. The Tsetlin machine, in contrast, learns online, processing one example at a time, like a neural network. Like logistic regression, it adds up evidence from different features; however, it builds non-linear logical rules instead of operating on single features. The Tsetlin machine also supports convolution for image processing and time series, and it can learn from penalties and rewards, addressing the contextual bandit problem. Finally, Tsetlin machines allow self-supervised learning by means of an autoencoder. So, quite different from decision trees.
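To make the rule-plus-summation idea concrete, here is a minimal Python sketch of the inference side only (the clauses are hand-written for illustration, not learned, and this is not code from the paper): each clause is a conjunction of input bits or their negations, and the class score is simply the sum of the clause votes.

```python
# Minimal sketch of Tsetlin machine inference with hand-written clauses.
# A literal is (feature_index, expected_value); a clause is a list of literals.
# Positive-polarity clauses vote +1 when they match, negative ones vote -1.
import numpy as np

clauses = [
    ([(0, 1), (1, 0)], +1),  # IF x0 AND NOT x1 THEN vote +1
    ([(0, 0), (1, 1)], +1),  # IF NOT x0 AND x1 THEN vote +1
    ([(0, 1), (1, 1)], -1),  # IF x0 AND x1 THEN vote -1
    ([(0, 0), (1, 0)], -1),  # IF NOT x0 AND NOT x1 THEN vote -1
]

def clause_output(literals, x):
    # A clause fires only if every literal it includes is satisfied.
    return int(all(x[i] == v for i, v in literals))

def classify(x):
    # Summation step: add up the votes of the matching clauses,
    # then threshold the sum to get the class.
    score = sum(polarity * clause_output(literals, x)
                for literals, polarity in clauses)
    return int(score >= 0), score

for x in np.array([[0, 0], [0, 1], [1, 0], [1, 1]]):
    label, score = classify(x)
    print(x, "-> class", label, "score", score)
```

The four toy clauses happen to encode XOR, which no linear model over the raw bits can represent; summing votes from conjunctive clauses is what gives the non-linearity while keeping every rule readable.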
Thanks for the questions! I introduced the Tsetlin machine in 2018 as an interpretable and transparent alternative to deep learning, and it is becoming increasingly popular, with promising results in several domains. The paper reports the first approach to using Tsetlin machines for ECG classification, and it is fantastic that you see potential opportunities in myocardial infarction prediction. If you like, I can do an online tutorial on Tsetlin machines with you and your team to give you a head start?
The autoencoder, meanwhile, can be used for self-supervised learning: https://arxiv.org/abs/2301.00709
Sounds like you are working on an interesting problem!
Hi u/SatoshiNotMe! To relate the Tsetlin machine to well-known techniques and challenges, I guess the following excerpt from the book could work:
"Recent research has brought increasingly accurate learning algorithms and powerful computation platforms. However, the accuracy gains come with escalating computation costs, and models are getting too complicated for humans to comprehend. Mounting computation costs make AI an asset for the few and impact the environment. Simultaneously, the obscurity of AI-driven decision-making raises ethical concerns. We are risking unfair, erroneous, and, in high-stakes domains, fatal decisions. Tsetlin machines address the following key challenges:
They are universal function approximators, like neural networks.
They are rule-based, like decision trees.
They are summation-based, like the Naive Bayes classifier and logistic regression.
They are hardware-near, with a low energy and memory footprint.
As such, the Tsetlin machine is a general-purpose, interpretable, and low-energy machine learning approach."
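To show the "rule-based yet summation-based" combination in practice, here is a rough training sketch on XOR with distracting noise features, a classic non-linear problem. It assumes the pyTsetlinMachine package and its MultiClassTsetlinMachine class; the hyperparameters (10 clauses, threshold T=15, specificity s=3.9) are illustrative values along the lines of that package's demos, not anything tuned for the paper.

```python
# Hedged sketch: learn XOR in the presence of noise features.
# Assumes the pyTsetlinMachine package (pip install pyTsetlinMachine);
# hyperparameters are illustrative, not tuned.
import numpy as np
from pyTsetlinMachine.tm import MultiClassTsetlinMachine

rng = np.random.default_rng(42)

# Twelve binary features; the label is the XOR of the first two,
# so the remaining ten bits are pure noise the clauses must learn to ignore.
X = rng.integers(0, 2, size=(5000, 12)).astype(np.uint32)
Y = np.logical_xor(X[:, 0], X[:, 1]).astype(np.uint32)

tm = MultiClassTsetlinMachine(10, 15, 3.9)  # clauses, threshold T, specificity s
tm.fit(X[:4000], Y[:4000], epochs=50)

accuracy = (tm.predict(X[4000:]) == Y[4000:]).mean()
print("Test accuracy:", accuracy)
```

After training, each of the ten clauses is a conjunction of input bits and their negations, and the class decision is just the sum of clause votes, which is what keeps the learned model readable.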
Hi u/Academic-Persimmon53! If you would like to learn more about Tsetlin machines, the first chapter of the book I am currently writing is a great place to start: https://tsetlinmachine.org