A neuromorphic chip enables generative AI with low power consumption

https://english.news.cn/20250908/c9ffdf3f03774f1e91eabe20eb847553/c.html

https://www.nature.com/articles/s41467-024-47811-6

Scientists from the Institute of Automation of the Chinese Academy of Sciences (CAS) have introduced "SpikingBrain-1.0," a large-scale model trained and run entirely on a GPU computing platform. Unlike mainstream generative AI systems built on the resource-intensive Transformer architecture, where capability scales with ever-larger networks, computing budgets and datasets, the new model trains efficiently on very small data volumes. Using only about 2 percent of the pre-training data required by mainstream large models, it matches the performance of several open-source models on language understanding and reasoning benchmarks, according to the team.

By harnessing event-driven spiking neurons at the inference stage, one SpikingBrain variant is shown to deliver a 26.5-fold speed-up over Transformer architectures when generating the first token from a one-million-token context. The model’s ability to handle ultra-long sequences offers clear efficiency gains for tasks such as legal or medical document analysis, high-energy particle-physics experiments and DNA sequence modeling.
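The report does not spell out the exact neuron model SpikingBrain uses, but a minimal leaky integrate-and-fire (LIF) sketch illustrates the event-driven principle behind such speed-ups: a neuron emits a binary spike only when its membrane potential crosses a threshold, so downstream layers do work only where spikes occur. All names and parameters below are illustrative assumptions, not SpikingBrain's actual configuration.

```python
import numpy as np

def lif_layer(inputs, threshold=1.0, decay=0.9):
    """Minimal leaky integrate-and-fire (LIF) layer.

    inputs: array of shape (timesteps, neurons) holding input currents.
    Returns a binary spike train of the same shape. The threshold and
    decay values are illustrative, not SpikingBrain's actual settings.
    """
    potential = np.zeros(inputs.shape[1])
    spikes = np.zeros_like(inputs)
    for t, current in enumerate(inputs):
        potential = decay * potential + current   # leaky integration
        fired = potential >= threshold            # event: threshold crossing
        spikes[t] = fired
        potential[fired] = 0.0                    # reset neurons that spiked
    return spikes

# Downstream layers only do work where spikes are nonzero: a synaptic update
# becomes an addition gated by a binary event instead of a dense
# multiply-accumulate, which is where the efficiency argument comes from.
rng = np.random.default_rng(0)
spike_train = lif_layer(rng.normal(0.2, 0.5, size=(100, 8)))
print(f"firing rate: {spike_train.mean():.2f}")
```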

The research team has open-sourced the SpikingBrain model, launched a public test page, and released a bilingual (Chinese and English) technical report.

In work reported last year in Nature Communications, scientists from the institute, together with Swiss collaborators, developed an energy-efficient sensing-computing neuromorphic chip that mimics the neurons and synapses of the human brain. The chip, dubbed "Speck," has a resting power consumption of just 0.42 milliwatts, consuming almost no energy when there is no input. By comparison, the human brain processes vast, intricate neural activity on a total power budget of about 20 watts, far below that of current AI systems.
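To illustrate why resting power can be so low, here is a conceptual sketch, with hypothetical names, of event-driven computation as used in such hardware: the cost of an update is proportional to the number of input events, and a silent input triggers no work at all. This models the principle only, not Speck's actual hardware interface.

```python
import numpy as np

def event_driven_update(output, weights, events):
    """Conceptual model of event-driven computation on neuromorphic
    hardware: work scales with the number of input events, not with
    network size. `weights` has shape (inputs, outputs); `events` lists
    the input indices that spiked this step. Names and shapes are
    illustrative, not Speck's actual interface."""
    for i in events:           # no events -> the loop body never runs,
        output += weights[i]   # mirroring near-zero resting power
    return output

weights = np.random.default_rng(1).normal(size=(16, 4))
out = np.zeros(4)
out = event_driven_update(out, weights, events=[2, 7])  # two spikes: two row additions
out = event_driven_update(out, weights, events=[])      # silent input: no work done
```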
