Can I make an API request to an endpoint on a Lambda server?

Hello,

Can I set the server up as an inference server so that I can make API requests to it from my local machine?

If so, is there any configuration I need to change? I tried doing it with FastAPI and uvicorn and it didn’t work.

Thanks in advance.

On-demand instances don’t have any relevant restrictions, so this should work. Make sure you have appropriate firewall rules in place so the port your inference server listens on is reachable from your local machine.
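
The other common gotcha is that uvicorn binds to 127.0.0.1 by default, so the server only accepts requests from inside the instance itself. Below is a minimal sketch of what this could look like; the `/predict` route, port 8000, and the echo handler are placeholder assumptions, not anything specific to Lambda:

```python
# Minimal FastAPI inference server sketch.
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
def predict(req: PredictRequest):
    # Placeholder: replace with your actual model inference call.
    return {"result": f"echo: {req.text}"}

if __name__ == "__main__":
    # Bind to 0.0.0.0 so connections from outside the instance are accepted;
    # uvicorn's default of 127.0.0.1 only allows local requests.
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

Once the port is open in your firewall rules, you could test it from your local machine with something like `curl -X POST http://<instance-public-ip>:8000/predict -H "Content-Type: application/json" -d '{"text": "hello"}'`.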