Containers in restarting state

Hi,
I am new to Bitwarden. I installed it on an ESXi host; the VM specs are:

- 2 vCPUs
- 2 GB memory
- 20 GB hard disk (Hard disk 1)

I installed using the bitwarden.sh deploy script. Everything went normally until I ran ./bitwarden.sh start
and checked the container status with docker ps.

It shows all containers as healthy except
admin,
sso,
and identity.
The three I named are constantly in a restarting state.
I reinstalled the full VM with Ubuntu 20.04 LTS and re-downloaded the deployment script, but it is still in this state.

[email protected]:~/bitwarden$ docker ps
CONTAINER ID   IMAGE                            COMMAND            CREATED         STATUS                           PORTS                                                                                    NAMES
d0a9806651dc   bitwarden/nginx:1.42.3           "/entrypoint.sh"   2 minutes ago   Up 2 minutes (healthy)           80/tcp, 0.0.0.0:80->8080/tcp, :::80->8080/tcp, 0.0.0.0:443->8443/tcp, :::443->8443/tcp   bitwarden-nginx
a712265a4345   bitwarden/admin:1.42.3           "/entrypoint.sh"   2 minutes ago   Restarting (139) 7 seconds ago                                                                                            bitwarden-admin
324ad7217546   bitwarden/portal:1.42.3          "/entrypoint.sh"   2 minutes ago   Up 2 minutes (healthy)           5000/tcp                                                                                 bitwarden-portal
57636c7edd1a   bitwarden/web:2.22.3             "/entrypoint.sh"   2 minutes ago   Up 2 minutes (healthy)                                                                                                    bitwarden-web
9844d607ecdb   bitwarden/identity:1.42.3        "/entrypoint.sh"   2 minutes ago   Restarting (139) 5 seconds ago                                                                                            bitwarden-identity
41923e1146a1   bitwarden/events:1.42.3          "/entrypoint.sh"   2 minutes ago   Up 2 minutes (unhealthy)         5000/tcp                                                                                 bitwarden-events
58b2b600a06b   bitwarden/api:1.42.3             "/entrypoint.sh"   2 minutes ago   Up 2 minutes (healthy)           5000/tcp                                                                                 bitwarden-api
f705855a2538   bitwarden/attachments:1.42.3     "/entrypoint.sh"   2 minutes ago   Up 2 minutes (healthy)                                                                                                    bitwarden-attachments
74c09e120b13   bitwarden/notifications:1.42.3   "/entrypoint.sh"   2 minutes ago   Up 2 minutes (healthy)           5000/tcp                                                                                 bitwarden-notifications
b48ec4ad073f   bitwarden/sso:1.42.3             "/entrypoint.sh"   2 minutes ago   Up 2 minutes (unhealthy)         5000/tcp                                                                                 bitwarden-sso
4dc36a79f16d   bitwarden/icons:1.42.3           "/entrypoint.sh"   2 minutes ago   Up 2 minutes (healthy)           5000/tcp                                                                                 bitwarden-icons
dd2d38eb645a   bitwarden/mssql:1.42.3           "/entrypoint.sh"   2 minutes ago   Up 2 minutes (healthy)                                                                                                    bitwarden-mssql

All environment values, except the YubiKey one and the other paid API one, are set up in the global env override file.

Same here. I just installed Bitwarden on a new server, looking to migrate from my old server, which is still running.
I have the identity and admin containers restarting, and the events and sso containers are showing ‘unhealthy’ status (but running).

I have been running another instance on another server for years without any such issues. I (think I) have troubleshot as much as I can: checked permissions, config.yml, global.override.env, and docker-compose.yml, and as far as I can tell nothing is wrong there.
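The main setting I looked at in global.override.env is the SQL connection string the identity and admin containers consume. In my file it is a single env line roughly like the following (names and values here are illustrative, not copied from a real install):

```
globalSettings__sqlServer__connectionString=Data Source=tcp:mssql,1433;Initial Catalog=vault;User Id=sa;Password=<generated>;Encrypt=True;TrustServerCertificate=True
```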

Container logs for identity and admin containers show this:

crit: Microsoft.AspNetCore.Hosting.Diagnostics[6]

Application startup exception

System.ArgumentException: Format of the initialization string does not conform to specification starting at index 194.

at System.Data.Common.DbConnectionOptions.GetKeyValuePair(String connectionString, Int32 currentPosition, StringBuilder buffer, Boolean useOdbcRules, String& keyname, String& keyvalue)

Along, of course, with many more debug lines.
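For what it's worth, that exception comes from the key=value parser for ADO.NET connection strings, so "starting at index 194" points at the character where the string stops looking like `Key=Value;Key=Value;…`. Here is a minimal sketch of that parsing rule and why an unescaped special character in a password breaks it; the function and messages are my own simplification, not the actual .NET code (the real parser also supports quoting values with `'` or `"`):

```python
def split_pairs(conn_str):
    """Naively split an ADO.NET-style connection string into key=value
    pairs, tracking the character index so a malformed segment can be
    reported the way System.Data does."""
    pairs = {}
    pos = 0  # index of the current segment within conn_str
    for part in conn_str.split(';'):
        if part and '=' not in part:
            # A segment with no '=' means an earlier value was truncated
            raise ValueError(
                "Format of the initialization string does not conform "
                f"to specification starting at index {pos}")
        if part:
            key, value = part.split('=', 1)
            pairs[key.strip()] = value.strip()
        pos += len(part) + 1  # +1 for the ';' separator
    return pairs

# A well-formed string parses fine:
split_pairs("Data Source=tcp:mssql,1433;User Id=sa;Password=Secret123")

# ...but an unescaped ';' inside the password truncates the value and
# leaves a dangling fragment, which triggers the format error:
try:
    split_pairs("Data Source=tcp:mssql,1433;User Id=sa;Password=Se;cret")
except ValueError as err:
    print(err)  # reports index 50, where the dangling "cret" starts
```

If that is what's happening here, the usual fix is to quote the offending value or regenerate the SA password without special characters.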

Is this a bug of the latest version, or is there something else going on?

I fixed it as described in the post below:
