abaxeron t1_ixm13pu wrote
Reply to comment by Nihilblistic in Do planetary magnetic fields have a strong impact on electric devices? by Nihilblistic
Eddy currents! I always forget what they're called. Glad I could be helpful!
abaxeron t1_ixlu1o8 wrote
A current-based electric machine (say, a simple resistive heater or a stove) stationed on a planet with a static magnetic field will work no differently than it does on Earth; if the magnetic field is not changing, it induces no currents.
Then things get weird. There's the Hall effect, which forces charges moving across a perpendicular magnetic field to drift toward one side of the wire and away from the other (which, in a practical sense, means a planet with an ultra-strong magnetic field would cause non-vertical wires to rust more quickly on one side).
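To get a feel for the magnitudes involved, the standard Hall-voltage formula is V_H = I·B / (n·q·t). A quick sketch (the current, field strength, and strip thickness below are illustrative values I've picked, not figures from the comment):

```python
# Hall voltage across a flat current-carrying strip: V_H = I*B / (n*q*t)
Q_E = 1.602e-19    # elementary charge, C
N_COPPER = 8.5e28  # charge-carrier density of copper, carriers per m^3

def hall_voltage(current_a, field_t, thickness_m, carrier_density=N_COPPER):
    """Transverse voltage pushing charges toward one side of the conductor."""
    return current_a * field_t / (carrier_density * Q_E * thickness_m)

# 1 A through a 1 mm thick copper strip in a hypothetical 1 T planetary field
v_hall = hall_voltage(1.0, 1.0, 1e-3)  # on the order of tens of nanovolts
```

Even in a field that strong the voltage is tiny, but it permanently biases which side of the conductor the charge sits on, which is the asymmetry behind the one-sided corrosion.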
Ferrite-based generators (sometimes encountered in very old electric toys and in roller blades / scooters with shiny LED lights) will stop working as intended in magnetic fields roughly ten thousand times stronger than Earth's, since such a field keeps the rotor perpetually saturated in one and the same direction. At fields about four times stronger still, motors, generators, and high-permeability iron alloys stop working for the same reason. Fast-moving vehicles will also experience currents induced in their hulls through homopolar generation (even when moving through a relatively homogeneous magnetic field).
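Those thresholds can be sanity-checked against typical saturation flux densities, and the homopolar effect follows the textbook motional-EMF formula (EMF = v·B·L). A small sketch, assuming Earth's surface field is about 50 µT; the vehicle speed and hull span are made-up illustrative numbers:

```python
EARTH_FIELD_T = 5e-5  # typical surface field, ~50 microtesla

# ~10,000x Earth's field lands near the saturation flux density of ferrites
ferrite_limit = EARTH_FIELD_T * 1e4  # ~0.5 T
# ~4x more reaches the saturation of soft-iron / high-permeability alloys
iron_limit = ferrite_limit * 4       # ~2 T

def motional_emf(speed_m_s, field_t, span_m):
    """EMF across a conductor moving through a static field: v * B * L."""
    return speed_m_s * field_t * span_m

# A vehicle doing 100 m/s with a 2 m conductive hull span in a 0.5 T field
emf = motional_emf(100.0, ferrite_limit, 2.0)  # ~100 volts across the hull
```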
Speaking specifically of Jupiter, there's "Radio Jove": emitted radio waves in the range of around 100 MHz that can be caught and listened to on a simple household radio. Considering that these radio waves are detectable from Earth, they are vastly stronger at the source, which is at least 588 million kilometers away (at Io's orbit, these radio waves are about 2 million times stronger). 100 MHz radio interference is perfectly capable of causing trouble here on Earth and will induce a current in any luckily oriented straight wire around 3/4 of a meter long (at very short and very long lengths, the induction is either negligible or destructively interferes with itself).
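The "3/4 of a meter" figure is consistent with a quarter-wave resonance at 100 MHz. A quick check using the standard wavelength formula (nothing assumed beyond the comment's 100 MHz):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength of an electromagnetic wave."""
    return C / freq_hz

lam = wavelength_m(100e6)  # ~3.0 m at 100 MHz
quarter = lam / 4          # ~0.75 m: the "luckily oriented" wire length
half = lam / 2             # ~1.5 m: a half-wave dipole would also resonate
```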
abaxeron t1_j4brylo wrote
Reply to What exactly is the process when someone "trains" an AI to learn or do something? by kindofaboveaverage
The simplest algorithm used for basically every student-level task in my youth was "backward propagation of errors".
To run this model, you need three things: a big multi-layered filter (which will be our "AI": a set of matrices the input data is multiplied by, plus an activation function to mix in some non-linearity), a sufficient set of input data, and a sufficient set of corresponding output data. Sufficient here means enough for the system to pick up on the general task.
Basically, you take the initial (empty or random) filter, feed it a piece of input data, subtract the output from the corresponding desired result (finding what we call the "error", i.e. the difference between the actual and desired result), and then you go backwards through the filter, layer by layer, adjusting the coefficients with simple, essentially arithmetic operations so that IF you fed the same data in again, the "error" would be smaller.
If you "overfeed" one and the same input to this model 10 million times, you'll end up with a system that can only generate the correct result for that specific input.
But when you randomly shift between several thousand different inputs, the filter ends up in an "imperfect but generally optimal" state.
The miracle of this algorithm is that it keeps working no matter how small the adjustments are, as long as they are made in the right direction.
One thing to keep in mind is that this particular model works best when the neuron activation function is monotonic, and the complexity of the task it can handle is limited by the number of layers.
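The loop described above (forward pass, compute the error, walk backwards nudging each coefficient in the right direction) can be sketched in plain Python. This is a minimal illustration, not the author's original program: a tiny 2-input, one-hidden-layer network with a sigmoid activation learning XOR, with the layer width, learning rate, and iteration count all chosen arbitrarily for the demo:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # monotonic activation function, as the comment recommends
    return 1.0 / (1.0 + math.exp(-x))

H = 3  # hidden-layer width: 2 inputs -> H hidden neurons -> 1 output
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
LR = 0.5  # size of each small adjustment

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def mean_squared_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

error_before = mean_squared_error()
for _ in range(20000):
    x, target = random.choice(data)   # randomly shift between inputs
    h, y = forward(x)
    d_y = (y - target) * y * (1 - y)  # the "error", scaled by the sigmoid slope
    for j in range(H):
        d_h = d_y * w2[j] * h[j] * (1 - h[j])  # error propagated backwards
        w2[j] -= LR * d_y * h[j]
        for i in range(2):
            w1[j][i] -= LR * d_h * x[i]
        b1[j] -= LR * d_h
    b2 -= LR * d_y
error_after = mean_squared_error()
```

After training, the average error over the dataset is smaller than at the random starting point, even though every individual adjustment was tiny.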
As a student, I made a simple demonstration program based on this principle that designed isotropic beams of equal resistance in response to given forces. In the process, I found that such a program requires two layers (since the task at hand is essentially a double integration).
I'm putting this response in because no one seems to have mentioned backward propagation of errors; modern, complex AI systems, especially those working on speech/text, actually use more sophisticated algorithms; it's just that this one is the most intuitive and easiest for humans to understand.