I tried calling the Lambda inference API with the following curl command…
curl -sS https://api.lambda.ai/v1/completions \
  -H "Authorization: Bearer <REDACTED>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-4-scout-17b-16e-instruct",
    "prompt": "Computers are",
    "temperature": 0
  }'
…but this resulted in the following response:
{"message":"model ID was not provided","type":"invalid_request_error"}
The model ID is definitely right there in the request… so what should I do?
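For anyone trying to reproduce this, here is the same request as a minimal Python sketch (assuming the requests library is available; LAMBDA_API_KEY is a hypothetical environment variable standing in for the redacted key). Sending the payload via json= means no shell quoting is involved, which should help rule out a local escaping problem:

import os

import requests

# Hypothetical env var standing in for the redacted API key above.
api_key = os.environ["LAMBDA_API_KEY"]

# Same endpoint and payload as the curl command; requests sets the
# Content-Type: application/json header automatically when json= is used.
response = requests.post(
    "https://api.lambda.ai/v1/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "llama-4-scout-17b-16e-instruct",
        "prompt": "Computers are",
        "temperature": 0,
    },
)

print(response.status_code, response.text)

If this comes back with the same invalid_request_error, shell quoting of the -d string can presumably be ruled out.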