plasma_phys t1_j4748z8 wrote

In the simplest case, you start with an untrained AI (some mathematical model with variable parameters) and training data for which you already know the desired output (supervised learning). Initially, the AI produces nonsense when given the training data, so you repeatedly make small changes to the parameters, checking that the actual output gets closer and closer to the desired output. At some point, the actual output is close enough to the desired output that you stop - the AI has been trained, and when given data sufficiently similar to the training data, it will produce the desired output even though it has never encountered that specific data before.
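
As a toy sketch of that loop (the data, model, and learning rate here are all made up for illustration, not any particular framework's API), here's a two-parameter model fit by gradient descent in Python:

```python
# Training data where we already know the desired output (here, y = 2x + 1).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0        # untrained "AI": the parameters start out producing nonsense
learning_rate = 0.01

for step in range(5000):
    # Gradient of mean squared error with respect to each parameter.
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    # Small change to the parameters that moves the output toward the target.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b)  # approaches w=2, b=1; the model now handles x values it never saw
```

Gradient descent is just one way to decide which "small change" to make; the point is that each step measures how far the output is from the desired output and adjusts the parameters to shrink that gap.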

It obviously gets more complicated, especially when you don't already know the desired output (unsupervised learning) or in more complex designs such as generative adversarial networks. Some machine learning approaches use specialized training algorithms, such as the Baum-Welch algorithm for Hidden Markov Models, while others rely on generic optimization algorithms. In general, though, repeatedly making small changes and comparing the new result to the previous one is a nearly universal part of training AI.
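
To show that generic pattern without gradients (the loss function and step size below are stand-ins I picked for illustration), here's a simple hill-climbing sketch: perturb a parameter, compare the new result to the previous one, and keep the change only if it improves things:

```python
import random

def loss(params):
    # Hypothetical objective: distance from a target the optimizer can't see directly.
    target = [2.0, -1.0, 0.5]
    return sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]   # initial guess
best = loss(params)

for step in range(10000):
    # Make a small random change to one parameter.
    candidate = params[:]
    i = random.randrange(len(candidate))
    candidate[i] += random.uniform(-0.1, 0.1)
    # Compare the new result to the previous one; keep only improvements.
    new = loss(candidate)
    if new < best:
        params, best = candidate, new

print(params, best)  # parameters drift toward the target as the loss shrinks
```

Real training algorithms are far smarter about which changes to try, but the propose-compare-keep loop is the common skeleton.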
