I don’t have much experience with Kubernetes, but two things come to mind here:
If you use the sqlite backend, make sure you're placing the database in a persistent volume. My guess is that if you used the default location (in the home directory), that state won't survive pod restarts, so you'll lose data. I would probably switch to a different backend, since I think sqlite is likely to be a poor fit for Kubernetes.
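If you do stay on sqlite for now, something along these lines should keep the file on a volume. This is only a sketch: the resource names, image, and mount path are all placeholders, and I'm assuming `PRODIGY_HOME` is where your Prodigy install writes its database.

```yaml
# Illustrative sketch only: names, image, and paths are hypothetical.
# Claim a persistent volume, mount it into the pod, and point
# PRODIGY_HOME at the mount so the sqlite file lands on the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prodigy-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prodigy
spec:
  replicas: 1
  selector:
    matchLabels: {app: prodigy}
  template:
    metadata:
      labels: {app: prodigy}
    spec:
      containers:
        - name: prodigy
          image: your-prodigy-image   # placeholder
          env:
            - name: PRODIGY_HOME     # database is written under this dir
              value: /prodigy-data
          volumeMounts:
            - name: data
              mountPath: /prodigy-data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: prodigy-data
```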
It looks to me like a setting that expects an integer is receiving the whole connection string. Perhaps you're passing the connection string to an argument that expects only the port?
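To illustrate what I mean (the connection string here is made up): the port is just one component of the string, so a setting that wants an integer will choke if it's handed the whole thing. If you really do need to pull the port out, the standard library can do it:

```python
from urllib.parse import urlparse

# Hypothetical connection string -- substitute your own.
conn_str = "postgresql://user:pass@db.example.com:5432/prodigy"

# Passing conn_str itself to a port setting would fail; the port
# is only one parsed component of the URL.
parsed = urlparse(conn_str)
port = parsed.port  # -> 5432, as an int
```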
Taking a little step back here, I’m guessing that you want to have a setup where you run some command and Kubernetes launches a new Prodigy task, and you get the URL of the task, right? This workflow requires a couple of steps of indirection.
If you just launch Prodigy tasks one-by-one on your laptop, it listens on localhost and you can point your browser to localhost. But if you’re launching tasks on remote machines, you probably want a reverse proxy, which should map the localhost URLs to something you can access. And then if you’re also starting and stopping tasks under automation, you probably also want something that will keep track of all the Prodigy tasks, allocate them names, get the ports they’re listening on, and organise the mapping for your reverse proxy.
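The bookkeeping piece above can be sketched very roughly like this. This is purely illustrative (the function names and URL scheme are my invention, not any Prodigy or Kubernetes API): a registry maps each task name to the host and port its process listens on, and the reverse proxy resolves a stable public path back to that upstream.

```python
# Illustrative sketch only: names and URL scheme are hypothetical.
tasks = {}

def register_task(name, host, port):
    """Record where a newly launched task is listening and
    return the stable path the reverse proxy should expose."""
    tasks[name] = {"host": host, "port": port, "path": f"/tasks/{name}"}
    return tasks[name]["path"]

def upstream_for(path):
    """What the reverse proxy does: map a public path like
    /tasks/<name> back to the host:port of the actual process."""
    name = path.split("/")[2]
    task = tasks[name]
    return f"http://{task['host']}:{task['port']}"

register_task("ner-batch-1", "10.0.0.5", 8080)
```

In practice you wouldn't hand-roll this: consul (or whatever service discovery your cluster uses) plays the role of `tasks`, and Traefik plays the role of `upstream_for`.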
For Prodigy Scale, we’re using Nomad to launch the Prodigy tasks, with consul for service discovery. We then use Traefik as the reverse proxy, which has a neat integration with consul’s service catalog.
I’m not sure what the favoured service discovery solution is for Kubernetes. It looks like consul has a reasonable integration: https://www.consul.io/docs/platform/k8s/run.html
These are the docs for Traefik with consul catalog: https://docs.traefik.io/configuration/backends/consulcatalog/
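For reference, the Traefik 1.x config for that integration is quite small. A sketch, with placeholder endpoint and domain values:

```toml
# Sketch of a Traefik 1.x consul catalog backend; values are placeholders.
[consulCatalog]
endpoint = "127.0.0.1:8500"   # address of the consul agent
exposedByDefault = false      # only proxy services explicitly tagged for Traefik
prefix = "traefik"
domain = "tasks.example.com"
```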