
Femtosense x CES2024

Sparse AI for the Real-time Edge


WHAT WE DO

Silicon and IP

Run AI applications at <1 mW by skipping zero-valued connections and activations (sketched below). Deploy as a co-processor chip or tileable IP

Software Tools

Develop your own compressed sparse models in PyTorch and deploy to the SPU with a Python API

Neural Networks

Sparsified, quantized, fine-tuned neural networks for applications like speech enhancement, keyword spotting, neural beamforming and more!
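To make the zero-skipping idea concrete, here is a back-of-the-envelope sketch in NumPy. The layer size and sparsity levels are illustrative assumptions, not SPU specifications; the point is simply that when both weights and activations are mostly zero, only a small fraction of the multiply-accumulates in a matrix-vector product carry any information.

import numpy as np

rng = np.random.default_rng(0)

# Toy dual-sparse layer: ~90% of weights and ~70% of activations are zero
# (illustrative numbers only, not SPU figures).
W = rng.standard_normal((128, 128)) * (rng.random((128, 128)) > 0.90)
x = rng.standard_normal(128) * (rng.random(128) > 0.70)

dense_macs = W.size                           # work a dense engine would perform
useful_macs = np.count_nonzero(W[:, x != 0])  # weight AND activation both nonzero

print(f"dense MACs:  {dense_macs}")
print(f"useful MACs: {useful_macs} (~{dense_macs / useful_macs:.0f}x fewer)")

A dense accelerator performs all 16,384 multiplies regardless of their values; hardware that skips zero operands only has to perform the useful ones.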

APPLICATIONS WE’VE ENABLED

Real-time speech enhancement at <1 mW

Always-on voice interface at ~100 µW

Continuous Anomaly Detection

AI Noise Cancellation Demo

Femtosense was at CES 2022 running an ultra-low-latency speech enhancement demo. A video is nice, but hearing is believing. Reach out to experience our latest version live and in person. Visit us at CES 2024 if you are around!


SPU-001 Now Available

Introducing SPU-001, the world’s first dual-sparsity AI accelerator for smaller electronic devices. Bring more functionality to your products without affecting battery life or cost.

MORE INFO

The Sparse Processing Unit

A Hyper-Efficient Hardware and Software Solution for Edge AI


SPU HARDWARE PLATFORM

We’ve built our hardware platform to achieve the efficiency of an ASIC while retaining the flexibility of a general-purpose accelerator. The SPU is easy to program, easy to simulate, and easy to deploy, allowing engineers and product managers to get innovative, class-leading products to market more quickly.

Efficiency

10x larger models | 100x efficiency

First-class hardware support for dual-sparsity neural networks for low memory footprint and high efficiency.

Scalability

A wider range of applications and scales

Tileable, all-digital core design with 512 kB per core (see the capacity sketch after these tiles). Can be ported to different process nodes.

Usability

Less time deploying, more time designing

Deploy as a co-processor or IP from PyTorch. Iterate with the model performance simulator.

Flexibility

Unrestricted to build the impossible

CNNs, RNNs, Transformers, and custom models supported with fine-grained optimization tools.
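As a rough illustration of how sparsity translates into the larger-model claim above, the arithmetic below estimates how many parameters fit in one 512 kB core. The bit width, sparsity level, and index-storage overhead are assumptions chosen for the example, not published SPU figures, so the exact multiple will vary.

# Rough capacity math for a single 512 kB core.
# Assumptions (illustrative only): 8-bit weights, 90% weight sparsity,
# and a 2x overhead factor for storing the positions of nonzero weights.
core_bytes     = 512 * 1024
bits_per_w     = 8
sparsity       = 0.90
index_overhead = 2.0

dense_params  = core_bytes * 8 // bits_per_w                          # ~0.5M dense weights
sparse_params = int(dense_params / index_overhead / (1 - sparsity))   # ~2.6M-parameter model

print(f"dense capacity:  ~{dense_params:,} parameters")
print(f"sparse capacity: ~{sparse_params:,} parameters (under these assumptions)")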

FEMTOSENSE SDK

We’ve built our software development platform to help companies of all sizes deploy optimal sparse AI models for tomorrow’s applications and form factors. Our SDK contains advanced sparse model optimization tools, a custom compiler, and a fast performance simulator. It’s everything you need from exploration to deployment.

Widely-used ML Frameworks

Develop and deploy networks from high-level Python frameworks like PyTorch.

Easy Optimization

Easily prune, quantize, and fine-tune sparse models with our model optimization tools (workflow sketched below).

Rapid Development

Estimate energy, latency, throughput, and model footprint from a Python API
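To give a rough sense of the pruning-and-quantization workflow described above, the sketch below uses standard PyTorch utilities (torch.nn.utils.prune and dynamic quantization) on a toy model. It shows the generic flow only; the Femtosense SDK's own optimization tools, compiler, and simulator API are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy keyword-spotting-sized model (layer sizes are illustrative).
model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 12))

# Prune 90% of each Linear layer's weights by L1 magnitude, then make it permanent.
for m in model.modules():
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.9)
        prune.remove(m, "weight")

# (Fine-tuning on task data would normally happen here to recover accuracy.)

# Quantize the Linear layers to int8 for a smaller memory footprint.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

nonzero = sum(int(p.ne(0).sum()) for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"nonzero parameters after pruning: {nonzero}/{total}")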

