FAQ


What can we help you find?

We offer efficient AI solutions for running sparse neural networks on edge devices, spanning accelerator hardware to sparse algorithms. Our platform comprises several components; which ones you need will depend on your team's size, expertise, and budget:

  1. Bespoke neural networks

  2. Model optimization tools

  3. Model performance simulator

  4. Compilers

  5. Scalable IP core designs

  6. Co-processor chips

  7. Project planning, engineering, and advisory

What tasks does the SPU support?

SPU-001 supports inference on audio, speech, and other 1-D time-series data. Please inquire about our list of supported operators and layers for SPU-001; you may be able to realize your application with what’s currently supported. Our second iteration, SPU-002, supports vision and NLP tasks as well as audio. SPU-002 will be available for testing in 2023, but you can inquire about the design now and start planning.

Can I run dense models on the SPU?

Yes, you can, but a sparse model will give you a much better SWAP (Size, Weight, Area, Power) performance trade-off. In many cases, introducing sparsity will have only a marginal effect on model accuracy if you sparsify using the techniques included in our optimization toolkit.

If your dense model is not yet trained, we recommend training it with a sparsity regularizer and quantization-aware training for maximum performance.
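As a rough illustration of what sparsity-regularized training can look like (a hypothetical plain-NumPy sketch, not our optimization toolkit, and with quantization-aware training omitted), an L1 penalty on the weights drives unimportant ones toward exactly zero during training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: only 2 of 10 input features actually matter.
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[[1, 7]] = [2.0, -3.0]
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(10)
lr, l1_lambda = 0.05, 0.01
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
    grad += l1_lambda * np.sign(w)      # L1 sparsity regularizer (subgradient)
    w -= lr * grad
    w[np.abs(w) < 1e-3] = 0.0           # snap near-zero weights to exactly zero

# After training, weights for the irrelevant features are exactly zero,
# while the two informative weights stay close to their true values.
```

The same idea carries over to neural networks: the regularizer trades a small amount of accuracy for many exact zeros, which sparse hardware can then skip entirely.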

If your dense model is pre-trained and quantized to SPU-supported precisions, you can either run it as is, perform post-training sparsification, or retrain it fully or from a model checkpoint using sparsity regularization and/or quantization-aware training. If you are fine-tuning your pre-trained model on a new dataset, that may also be a good time to introduce sparsity and quantization.
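To make "post-training sparsification" concrete, here is a minimal sketch of one common technique, magnitude pruning, written in plain NumPy (a hypothetical illustration; our toolkit's actual methods may differ):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Ties at the threshold are all pruned, so the achieved sparsity
    can slightly exceed the requested fraction.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.9, -0.05, 0.3],
              [-0.02, 0.7, 0.1]])
w_sparse = magnitude_prune(w, sparsity=0.5)
# The three smallest-magnitude entries (-0.05, -0.02, 0.1) are now zero.
```

Pruning after training is the cheapest path to sparsity, but at high sparsity levels a short fine-tuning pass is typically needed to recover accuracy.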

Can I run my existing neural network on the SPU?

Yes, you can, assuming the layers and operators are supported and the neural network is quantized to SPU-supported precisions. For a list of supported operators, layers, and precisions, please request a specification sheet via our CONTACT US form. You may also refer to the “Can I run dense models on the SPU?” question above for more about running models on the SPU.

Are you hiring?

Yes. Send us an email with your resumé or CV. We’re always looking for driven, curious, and ambitious people.

Where does the name Femtosense come from?

We’re glad you asked. Femto is the SI prefix for 10^-15. When a neuron fires in the human brain, the amount of energy dissipated is on the order of femtojoules. Femtosense technology is inspired by research conducted at the Stanford University Brains in Silicon Lab, where our founders created a spiking neural network chip whose synaptic operations cost, you guessed it, on the order of femtojoules. Since many inference problems are also sensing problems, Femtosense seemed like a good idea at the time.

What does your logo mean?

Our logo was drawn by our CTO and Co-founder, Alex Neckar. It’s a depiction of a self-connecting neuron, referencing the recurrent connections in RNNs and other feedback-type neural networks. We think it’s pretty neat.