Syntiant Achieves 50% Acceleration of Large Language Models for Edge Devices

IRVINE, Calif., Jan. 08, 2024 (GLOBE NEWSWIRE) -- Syntiant Corp., a leader in edge AI deployment, today announced results of its first optimizations that reduce the computational footprint of leading large language model (LLM) architectures, enabling these massive neural networks to run on cloud-free, always-on devices at the edge of networks.

“We are focused on making edge AI a reality, bringing low power, highly accurate intelligence to edge devices,” said Kurt Busch, CEO at Syntiant. “Our core model optimizations enable considerable processing time acceleration across a range of compute platforms. Those same optimizations, which Syntiant has deployed across millions of devices worldwide, have been successfully applied to LLMs, bringing conversational speech to the edge, serving as the new interface between humans and machines.”

Syntiant used a novel algorithm to determine the sparsity fraction of LLMs, generating significant speedups in output token generation and reducing memory footprint. The company achieved 50% sparsity with minimal accuracy loss when computing with 8-bit quantized weights, significantly improving interpretability and processing power while reducing cloud costs. These improvements, along with a custom SIMD (single instruction/multiple data) kernel and several other algorithmic innovations, enable Syntiant to achieve a 50% increase in output token generation speed on a LLaMa-7B benchmark.
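To illustrate the general techniques named above (the release does not disclose Syntiant's actual algorithm or kernel), the sketch below prunes a single LLaMa-7B-sized weight matrix to 50% sparsity by magnitude and applies symmetric 8-bit quantization; the matrix size, pruning rule, and quantization scheme are all illustrative assumptions, not Syntiant's method:

```python
import numpy as np

def prune_to_sparsity(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that roughly
    `sparsity` fraction of entries become zero (magnitude pruning)."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
# One 4096x4096 layer, roughly the hidden dimension of LLaMa-7B.
w = rng.normal(size=(4096, 4096)).astype(np.float32)

w_sparse = prune_to_sparsity(w, 0.5)
q, scale = quantize_int8(w_sparse)

# int8 storage is 4x smaller than fp32; a sparse-aware kernel can
# additionally skip the ~50% of weights that are now zero.
fp32_mb = w.size * 4 / 1e6
int8_mb = q.size * 1 / 1e6
print(f"fp32 dense: {fp32_mb:.0f} MB, int8: {int8_mb:.0f} MB")
print(f"fraction zero after pruning: {(w_sparse == 0).mean():.2f}")
```

In practice the speedup comes from pairing such a representation with a kernel (e.g. SIMD) that exploits the zeros and narrow integer arithmetic, which is the combination the release describes.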

“Syntiant continues to lead the category of low power NLP with production-ready solutions that directly address voice AI workload efficiency and could enable smarter conversational endpoints with generative AI. At M12, Microsoft’s Venture Fund, we’re proud to back Syntiant who we feel will help establish a ‘first mile’ standard for future speech-to-speech AI use cases,” said Michael Stewart, partner at M12.

Demoed live at CES 2024 in Las Vegas, Syntiant’s optimizations showcase a new class of LLMs that run entirely at the edge. This significant processing time acceleration brings meaningful benefits to end users from both latency and privacy perspectives, across a wide variety of consumer and commercial use cases, from earbuds to automobiles.

CES 2024
Syntiant will be demonstrating its latest innovations for edge-deployed deep learning solutions for always-on vision, audio and sensing applications at the Venetian Palazzo Hospitality Suites from January 9-12. Visit www.syntiant.com or contact info@syntiant.com to schedule a demo of the company’s technology being deployed in smart homes, teleconferencing solutions and event detection devices, among other use cases.

About Syntiant
Founded in 2017 and headquartered in Irvine, Calif., Syntiant Corp. is a leader in delivering hardware and software solutions for edge AI deployment. The company’s purpose-built silicon and hardware-agnostic models are being deployed globally to power edge AI speech, audio, sensor and vision applications across a wide range of consumer and industrial use cases, from earbuds to automobiles. Syntiant’s advanced chip solutions merge deep learning with semiconductor design to produce ultra-low-power, high performance, deep neural network processors. Syntiant also provides compute-efficient software solutions with proprietary model architectures that enable world-leading inference speed and minimized memory footprint across a broad range of processors. The company is backed by several of the world’s leading strategic and financial investors including Intel Capital, Microsoft’s M12, Applied Ventures, Robert Bosch Venture Capital, the Amazon Alexa Fund and Atlantic Bridge Capital. More information on the company can be found by visiting www.syntiant.com or by following Syntiant on X @Syntiantcorp or LinkedIn.    

Contact:

George Medici
PondelWilkinson
gmedici@pondel.com
310.279.5968
