WHAT DOES FEMTOSENSE OFFER?
We offer efficient AI solutions for running sparse neural networks on edge devices, spanning from accelerator hardware to sparse algorithms. Our platform can be broken down into a few offerings, some of which you may need and some of which you may not, depending on your team size, expertise, and budget:
- Bespoke neural networks
- Model optimization tools
- Model performance simulator
- Compilers
- Scalable IP core designs
- Co-processor chips
- Project planning, engineering, and advisory
CAN THE SPU DO COMPUTER VISION?
SPU-001 supports inference on audio, speech, and other 1-D time-series data. Please inquire about our list of supported operators and layers for SPU-001; you may be able to realize your application with what’s currently supported. Our second iteration (SPU-002) supports vision and NLP tasks, as well as audio. SPU-002 will be available for testing in 2024, but you can inquire about the design now and start planning.
CAN I RUN DENSE MODELS ON THE SPU?
Yes, you can, but a sparse model will give you a much better SWAP (Size, Weight, Area, Power) performance trade-off. In many cases, introducing sparsity has only a marginal effect on model accuracy if you sparsify using the techniques included in our optimization toolkit.
If your dense model is not yet trained, we recommend training it with a sparsity regularizer and quantization-aware training for maximum performance.
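To make that concrete, here is a minimal PyTorch sketch of one way to combine a sparsity regularizer (an L1 penalty on the weights) with PyTorch's eager-mode quantization-aware training flow. This is an illustration only, not our optimization toolkit: the SmallNet model, the lambda_sparse coefficient, and the dummy data are placeholder assumptions.

    import torch
    import torch.nn as nn

    class SmallNet(nn.Module):
        # Placeholder network; substitute your own architecture.
        def __init__(self):
            super().__init__()
            self.quant = torch.quantization.QuantStub()
            self.fc1 = nn.Linear(64, 128)
            self.relu = nn.ReLU()
            self.fc2 = nn.Linear(128, 10)
            self.dequant = torch.quantization.DeQuantStub()

        def forward(self, x):
            x = self.quant(x)
            x = self.fc2(self.relu(self.fc1(x)))
            return self.dequant(x)

    model = SmallNet().train()
    model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
    qat_model = torch.quantization.prepare_qat(model)

    optimizer = torch.optim.Adam(qat_model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    lambda_sparse = 1e-4  # assumed strength of the sparsity penalty
    dataloader = [(torch.randn(32, 64), torch.randint(0, 10, (32,))) for _ in range(10)]  # dummy data

    for inputs, targets in dataloader:
        optimizer.zero_grad()
        task_loss = criterion(qat_model(inputs), targets)
        # The L1 penalty pushes many weights toward exactly zero, i.e. sparsity.
        l1_penalty = sum(p.abs().sum() for n, p in qat_model.named_parameters() if n.endswith("weight"))
        (task_loss + lambda_sparse * l1_penalty).backward()
        optimizer.step()

    # After training, fold the fake-quantization observers into an int8 model.
    int8_model = torch.quantization.convert(qat_model.eval())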
If your dense model is pre-trained and quantized to SPU-supported precisions, you can run it as is, perform post-training sparsification, or retrain it, either fully or from a model checkpoint, using sparsity regularization and/or quantization-aware training. If you are fine-tuning your pre-trained model on a new dataset, that may also be a good time to introduce sparsity and quantization.
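As a rough sketch of what post-training sparsification can look like, the snippet below applies magnitude pruning to a pre-trained network using PyTorch's pruning utilities. The placeholder model and the 80% sparsity target are assumptions for illustration; they are not SPU requirements, and this is not necessarily the method our toolkit uses.

    import torch
    import torch.nn as nn
    from torch.nn.utils import prune

    # Placeholder stand-in for a pre-trained model; load your own checkpoint instead.
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

    # Zero out the smallest-magnitude weights in every Linear layer.
    # The 80% sparsity target is an example value, not an SPU requirement.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.8)
            prune.remove(module, "weight")  # bake the zeros into the weight tensor

    # Report the resulting weight sparsity.
    weights = [p for n, p in model.named_parameters() if n.endswith("weight")]
    total = sum(p.numel() for p in weights)
    zeros = sum((p == 0).sum().item() for p in weights)
    print(f"weight sparsity: {zeros / total:.1%}")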
CAN I RUN OFF-THE-SHELF MODELS ON THE SPU?
Yes, you can, assuming the layers and operators are supported and the neural network is quantized to SPU-supported precisions. For a list of supported operators, layers, and precisions, please request a specification sheet via our CONTACT US form. You may also refer to the "CAN I RUN DENSE MODELS ON THE SPU?" question above for more on running models on the SPU.
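If you want a quick sanity check before reaching out, a small script like the one below can list the operators an off-the-shelf ONNX model uses so you can compare them against the specification sheet. The SUPPORTED_OPS set here is a hypothetical placeholder, not the actual SPU operator list.

    import onnx

    # Hypothetical placeholder set; request the real specification sheet via our
    # CONTACT US form for the actual list of supported operators and precisions.
    SUPPORTED_OPS = {"Conv", "Gemm", "MatMul", "Relu", "Add", "MaxPool"}

    model = onnx.load("model.onnx")  # path to your off-the-shelf model
    used_ops = {node.op_type for node in model.graph.node}

    unsupported = sorted(used_ops - SUPPORTED_OPS)
    if unsupported:
        print("Operators to check against the spec sheet:", unsupported)
    else:
        print("Every operator in this graph is in the placeholder supported set.")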
DO YOU HAVE OPEN POSITIONS?
Yes. Send us an email with your resumé or CV. We’re always looking for driven, curious, and ambitious people.
WHAT'S A "FEMTOSENSE"?
We’re glad you asked. Femto is the SI prefix for 10^-15. When a neuron fires in the human brain, the amount of energy dissipated is on the order of femtojoules. Femtosense technology is inspired by research conducted at the Stanford University Brains in Silicon Lab, where our founders created a spiking neural network chip, the synaptic operations of which were on the order of, you guessed it, femtojoules. Since many inference problems are also sensing problems, Femtosense seemed like a good idea at the time.
WHAT'S WITH THAT LOGO?
Our logo was drawn by our CTO and Co-founder, Alex Neckar. It’s a depiction of a self-connecting neuron, referencing the recurrent connections in RNNs, and inspired by the Ouroboros, an ancient symbol depicting a serpent or dragon eating its own tail. We call it the Neuroboros; we think it’s pretty neat.