currentscurrents t1_j8em94v wrote
Reply to comment by That_Violinist_18 in The Inference Cost Of Search Disruption – Large Language Model Cost Analysis [D] by norcalnatv
Samsung's working on in-memory processing. This is still digital logic and still Von Neumann, but by putting a bunch of tiny processors inside the memory chip, each one gets its own memory bus that it can access in parallel. A rough sketch of the idea is below.
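Here's a toy illustration of why that helps (not Samsung's actual API, just the concept): with one shared bus, every element has to stream through the host; with a processor per bank, each bank reduces its own data locally and only the small partial results cross the bus.

```python
from concurrent.futures import ThreadPoolExecutor

def von_neumann_sum(banks):
    # Everything crosses a single host<->memory bus, element by element.
    return sum(x for bank in banks for x in bank)

def pim_sum(banks):
    # Each "bank processor" reduces its own bank in parallel; only the
    # small partial sums travel back over the bus to the host.
    with ThreadPoolExecutor(max_workers=len(banks)) as pool:
        partials = list(pool.map(sum, banks))
    return sum(partials)

banks = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]
assert von_neumann_sum(banks) == pim_sum(banks)
```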
Most research on non-Von-Neumann architectures is focused on spiking neural networks (SNNs). Both startups and big tech are working on analog SNN chips. So far these are proofs of concept: they work and achieve extremely low power usage, but they aren't at a big enough scale to compete with GPUs.
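For anyone unfamiliar with what an SNN computes, here's a minimal leaky integrate-and-fire neuron in plain Python (illustrative parameter values only): the membrane potential leaks over time, integrates input current, and emits a binary spike when it crosses a threshold. Analog chips get their efficiency by representing this dynamics physically instead of simulating it step by step in digital logic.

```python
def lif_neuron(input_current, leak=0.9, threshold=1.0):
    # Leaky integrate-and-fire: leak the membrane potential, add input,
    # fire a spike and reset when the threshold is crossed.
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leak + integrate
        if v >= threshold:        # fire
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.6, 0.7]))  # -> [0, 0, 1, 0, 0, 1]
```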