Are DeepSeek R1 and other models not marked as FP8 running at full precision on the Inference API?

My co-author and I have a Lambda Grant for running evals on state-of-the-art models for our research paper. I came across the Lambda Inference API, and it looks solid: the pricing is cheap enough that we could finish our experiments within the existing grant budget without extending it. But when I looked at the Inference | Lambda page, I realised Lambda is serving the FP8 version of most of the models we are interested in.

That is not ideal for us: we need at least bfloat16 numerical precision, since that is what most published evals and industry baselines rely on. Our experiments are quite sensitive to precision and related factors, so we want to be certain we are allowed to use the best configuration available.

I wanted to know whether DeepSeek R1 and similar models are only available as an FP8 quant, or whether they are also offered in bfloat16 and other precisions. And would it be possible to add bfloat16 support, as well as Qwen3-238B-A3B and Command-A, to the Inference API?
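
For context, this is roughly how we are enumerating what is currently served. A minimal sketch, assuming the Inference API exposes the standard OpenAI-compatible /models endpoint; the base URL, the LAMBDA_API_KEY env var name, and the response shape are my assumptions from the docs and may differ:

```python
import os
import requests

# Sketch: list the models the Lambda Inference API currently serves.
# Assumes an OpenAI-compatible GET /models endpoint; base URL and the
# env var name are assumptions, not confirmed specifics.
API_KEY = os.environ["LAMBDA_API_KEY"]
BASE_URL = "https://api.lambda.ai/v1"

resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()

for model in resp.json()["data"]:
    # The model IDs alone don't tell us the serving precision (FP8 vs
    # bfloat16), which is exactly why I'm asking here.
    print(model["id"])
```

If the precision (or a bfloat16 variant) were reflected in the model ID or metadata, that would already answer most of this question for us.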