Submitted by UberStone t3_1015pjo in MachineLearning
JimmyTheCrossEyedDog t1_j2lzmhs wrote
As someone who knows next to nothing about electronic components, can you provide some example inputs and outputs? Without knowing what the exact problem is, it's hard to determine feasibility.
Off the top of my head, if the symbolic language is quite simple (i.e., every symbol acts more or less independently of the others, so you can just tack the text for each one on after another), you can essentially do this with optical character recognition or some computer vision approach and a simple set of rules to translate each visual detection into text. If the way these diagrams work is more complicated than that, though, it may not be so simple.
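Roughly, I'm picturing something like this: the detector/OCR step returns (symbol class, nearby label text) pairs, and a lookup table turns each pair into a line of text. A minimal sketch, where the Detection fields and the RULES table are just made-up placeholders:

```python
# Sketch of the "detect, then translate with rules" idea.
# Assumes an upstream detector/OCR step already produces these
# detections; everything named here is hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    symbol: str   # class predicted by the vision model, e.g. "hdmi_port"
    text: str     # nearby label read by OCR, e.g. "VIDEO 1"

# One translation rule per symbol class, applied independently.
RULES = {
    "hdmi_port": lambda d: f"HDMI input labeled {d.text}",
    "rca_jack":  lambda d: f"RCA jack labeled {d.text}",
}

def translate(detections):
    """Apply the per-symbol rule to each detection on its own."""
    return [RULES[d.symbol](d) for d in detections if d.symbol in RULES]

print(translate([Detection("hdmi_port", "VIDEO 1"),
                 Detection("rca_jack", "AUDIO OUT")]))
```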
UberStone OP t1_j2m8zyl wrote
Good question. Basically, every connection on a back panel would generate the connection type, signal type, label, and input or output (or both). A simple example would be an AV receiver with three HDMI inputs labeled VIDEO 1, VIDEO 2, and VIDEO 3. The ML/AI would recognize the actual HDMI port, find the label corresponding to that port, and produce the following output.
IN-HDMI-HDMI-VIDEO1
IN-HDMI-HDMI-VIDEO2
IN-HDMI-HDMI-VIDEO3
This covers about 80% of the components; the other 20% of the connections are edge cases.
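For illustration, here's a rough sketch of what that port-to-label matching might look like, assuming a detector that returns connector positions and an OCR step that returns label positions (the coordinates and fields below are made up):

```python
# Pair each detected connector with its nearest OCR'd label and
# emit the structured string. The detection format and the
# direction/signal fields are assumptions, not a real pipeline.
import math

connectors = [  # (connector_type, signal_type, direction, (x, y) center)
    ("HDMI", "HDMI", "IN", (120, 40)),
    ("HDMI", "HDMI", "IN", (220, 40)),
    ("HDMI", "HDMI", "IN", (320, 40)),
]
labels = [  # (OCR text, (x, y) center)
    ("VIDEO 1", (120, 65)), ("VIDEO 2", (220, 65)), ("VIDEO 3", (320, 65)),
]

def nearest_label(center, labels):
    """Pick the OCR'd label whose center is closest to the connector."""
    return min(labels, key=lambda l: math.dist(center, l[1]))[0]

for ctype, signal, direction, center in connectors:
    label = nearest_label(center, labels).replace(" ", "")
    print(f"{direction}-{ctype}-{signal}-{label}")
# prints IN-HDMI-HDMI-VIDEO1, IN-HDMI-HDMI-VIDEO2, IN-HDMI-HDMI-VIDEO3
```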