QristopherQuixote t1_j8ogjgs wrote

Do you need a history of all the things sci-fi got wrong? Asimov, Heinlein, etc.?

Contagion is based on actual science. Read books by Robin Cook if you want to see an actual scientist write science fiction; his book "Vector" predicted the use of anthrax as a terrorist weapon. By contrast, folks like Michael Crichton have been spectacularly wrong even though he had an MD. Crichton was a science skeptic in some respects: he questioned bans on DDT and wrote a book that made a mockery of environmental activism. He also wrote a cautionary book about AI called "Prey," featuring a nanobot swarm intelligence that was beyond silly.

We don't even know if strong AI is possible, and it doesn't appear to be necessary for us to get value from task-based AI. Artificial neural nets are everywhere, including in cruise control in cars and smart thermostats, and some smartphones, like the Pixel, have them too. Components of AI are being used more and more.

We cannot confuse complexity with strong AI: very complex AI systems can still be weak, task-based AI. Consciousness and independent action are not part of AI today, and no existing AI system can be considered to be "thinking." The idea that an AI overlord will emerge to override human action is pure science fiction. The human brain has trillions of interconnections between billions of neurons, fed by an incredible input system; no computer can match it yet.

2

QristopherQuixote t1_j8nw2c1 wrote

You shouldn't rely on science fiction to be your guide on how AI will evolve. In The Terminator, Skynet is evil. In I, Robot, it was insane. In Bicentennial Man, it became fully human. In Star Wars, it was benign and essentially slavish. The robots in Interstellar were essentially assistants who did not act independently. In Transcendence, a human mind was "uploaded," creating a strong AI. In Chappie, strong AI happened by accident, emerging in a robot and, later, in a digital copy of a human mind.

Strong AI doesn't exist... yet.

2

QristopherQuixote t1_j8nuhs2 wrote

Yup. His flailing around with engineers at Twitter looked like a Dilbert cartoon with the pointy-haired boss trying to talk about code.

AI seems like magic until you look under the hood. There's an enormous amount of human intelligence and judgment that goes into tweaking AIs to perform well. My first neural network was a grad school class project to find a nose on a human face. When I finished and had it working, I was happy but also disappointed to learn how they actually work. It drove home for me the difference between weak and strong AI.
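For a sense of what a project like that involves, here is a minimal sketch of the classic approach: slide a window across the image and score each patch with a trained classifier. Everything here is hypothetical for illustration (the patch size, the stride, and especially the stand-in scorer); a real version would train the scorer on labeled nose/not-nose patches.

```python
import numpy as np

def find_feature(image, score_patch, patch=16, stride=4):
    """Slide a window over the image; return the top-left corner of
    the patch the scorer rates as most 'nose-like'."""
    best, best_xy = -np.inf, (0, 0)
    h, w = image.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            s = score_patch(image[y:y + patch, x:x + patch])
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy

# Stand-in scorer: random weights, purely to make the sketch runnable.
# A real project would put a trained network's output here.
rng = np.random.default_rng(0)
weights = rng.normal(size=(16, 16))
score = lambda p: float((p * weights).sum())

img = rng.random((64, 64))       # placeholder grayscale "face" image
print(find_feature(img, score))  # (x, y) of the highest-scoring patch
```

Once you see that the "magic" is a scoring function swept over pixels, the weak/strong distinction is hard to unsee.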

3

QristopherQuixote t1_j8nqhxa wrote

Strong AI implies consciousness and self-awareness, and it has been the holy grail of AI since the 1970s. Neural networks, by contrast, are function approximators: they learn to map inputs to desired outputs. They train on classified or labeled data, using feedback to self-correct (backpropagation) until their output is acceptable. Deep learning and layered networks leverage already-trained models to produce a more complex network, and there are several different types of networks, such as convolutional and feed-forward. By using multi-model and filtering approaches, models can be combined so that more and more complex tasks can be accomplished. For example, driving involves several models working in concert: one that determines the road type, a few more for feature extraction, and so on.

Many statistical models, such as clustering and regression, are now called "machine learning" and AI, even though they weren't when I first learned them. Many of the original AI systems were rules-based and were called "expert systems." However, how all these techniques produce outputs is dramatically different from how a brain does. Mimicking human behavior and capabilities is very different from possessing them the way any creature with a brain does.
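To make the training loop concrete, here is a minimal sketch, in Python with NumPy, of backpropagation on a tiny feed-forward network. The task (XOR), the architecture, the learning rate, and the epoch count are all illustrative choices, not anything from a production system.

```python
import numpy as np

# Labeled training data: XOR, a classic task a single neuron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    # Forward pass: the network's current guess at the function.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the
    # layers to get each weight's contribution to it (the gradient).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Self-correction: nudge every weight against its gradient.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [0, 1, 1, 0]
```

Nothing in that loop is "thinking": it is curve fitting, repeated until the outputs are acceptable.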

2

QristopherQuixote t1_j8bohob wrote

Look up Piaget's stages of cognitive development. The truth is we are born pre-wired to learn certain things very quickly; Immanuel Kant was also a big proponent of this type of theory, which he spelled out in his very long and boring treatises. For example, we have a cognitive and neural framework biased toward learning language. However, there is a critical period, ending between ages 6 and 8, after which it becomes much harder to learn a new language. We even have specific areas in the brain where meaning and grammar are processed (Broca's area for the production of speech, Wernicke's area for its comprehension). Language also shapes the way we organize our thoughts and process information. We are not born with concepts or ideas, but our minds are configured to integrate some types of information quickly and in specific ways. Our mind/brain is a combination of native abilities and how they are developed through stimulation.

30

QristopherQuixote t1_j76nuad wrote

Where did you do your graduate work to develop the background necessary to evaluate the work of hundreds, if not thousands, of PhDs with whom you disagree? I did mine at a Big Ten university. My first science job was sequencing a bacteriophage now used in gene therapy. However, I'm sure your "critical thinking" will allow you to overcome any gaps in your education and experience.

1

QristopherQuixote t1_j72r6wy wrote

Everything around epidemiology is an estimate, and outcomes have to be measured at the population level due to variance between individuals, which is why anecdotes never qualify as data. You can certainly compare populations who have been vaxxed against those who have not to build mortality and morbidity models. The issue is the completeness of the variables around each member of both groups, which is the core data-collection problem of any population-level medical study on humans.

This is easier to do, and more reliable, than you are indicating when done at the population level. Yes, I have done vaccine efficacy research in the past, when I was doing systems work on vaccine registries. I have built statistical models and still do. My last gig before my current job was building algorithms around pathogen detection.
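As a toy illustration of the population-level comparison (all counts invented; real studies adjust for age, exposure, comorbidities, and the other variables mentioned above):

```python
# Hypothetical cohorts followed over the same period.
vaxxed_n, vaxxed_cases = 100_000, 120
unvaxxed_n, unvaxxed_cases = 100_000, 900

risk_vaxxed = vaxxed_cases / vaxxed_n
risk_unvaxxed = unvaxxed_cases / unvaxxed_n

relative_risk = risk_vaxxed / risk_unvaxxed
efficacy = 1 - relative_risk                   # VE = 1 - RR

print(f"relative risk: {relative_risk:.3f}")   # 0.133
print(f"vaccine efficacy: {efficacy:.1%}")     # 86.7%
```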

0

QristopherQuixote t1_j5ejbjc wrote

Lie. There are literally hundreds of studies showing the excess deaths in 2020 and 2021, but that wouldn’t fit your antivax rhetoric.

https://www.cdc.gov/mmwr/volumes/70/wr/mm7015a4.htm

https://jamanetwork.com/journals/jama/fullarticle/2778361

https://www.nature.com/articles/s41598-022-21844-7

1

QristopherQuixote t1_j3lp44r wrote

Drones would never work in my neighborhood. We have too many trees, above-ground power lines with poles, and other cables for TV, internet, etc. We also have winds, including strong gusts in winter and summer. There would be narrow windows for flying drones, and only in certain relatively new neighborhoods with far fewer obstructions than older, more established ones. We also have large raptors patrolling the neighborhood; I would love to see the drones mix it up with the red-tailed hawk who keeps our small animals and small birds wary :)

I could see automated vehicle deliveries becoming a thing where I live, but not airborne drones.

2

QristopherQuixote t1_j275skx wrote

This might work for government, but what about the huge number of companies that use BYOD for phones, giving their employees a stipend? TikTok is a security issue: researchers have found inactive malicious code capable of recording data entered into sites loaded from the in-app browser (ads that pull up websites or other links). The Chinese government owns a stake in ByteDance and can ask for data at any time.

It is unclear whether ByteDance's long-delayed plan to host US data with Oracle will be enough to address these concerns. However, people love TikTok, and the US is unlikely to stop its use. The best outcome would be a US competitor, though I doubt it will be any of those currently in use.

1

QristopherQuixote t1_j0ynel8 wrote

AI is task-oriented, not a mind replacement. Strong AI would be the equivalent of a human mind with potentially more computing power; such an entity has never been created. We don't even know how our minds work, let alone how to code one to run on a chip or set of chips. You could probably train models to make certain types of decisions, and those decisions would be more consistent and objective, but making one that replaces humans outright, with the breadth of decisions they make, is not yet possible.
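As a sketch of that last point (training a model to make one narrow, consistent decision), here is a hypothetical example using scikit-learn; the features, the labels, and the approve/deny task itself are all invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up narrow decision: approve (1) or deny (0) a request based on
# two features, a risk score and a count of prior incidents. The
# labels come from past human decisions; the model just reproduces
# that policy consistently.
X = [[0.1, 0], [0.2, 1], [0.8, 3], [0.9, 5], [0.4, 1], [0.7, 4]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[0.3, 0], [0.85, 4]]))  # expect: [1 0]
```

Consistent within its lane, but the lane is all it has: it can't notice a kind of decision it was never trained on.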

2