Bitwarden lookup plugin for Ansible causes panic errors and "A worker was found in a dead state"

I’m using the lookup plugin like this:

admin_user: "{{ lookup('bitwarden.secrets.lookup', 'da63147', base_url='') }}"

Ansible fails with this error:

Parsing secret ID
Validating field argument
Parsing Bitwarden environment URL
secret_id: da63aa012b1147
field: value
state_file_dir: None
Authenticating with Bitwarden
Parsing secret ID
Validating field argument
Parsing Bitwarden environment URL
secret_id: 2b49d959b8274
thread 'field: value
<unnamed>' panicked at /Users/runner/.cargo/registry/src/
called `Result::unwrap()` on an `Err` value: SetLoggerError(())
stack backtrace:
state_file_dir: None
Authenticating with Bitwarden
   0:        0x111262770 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h958f6e6821e9b0fb
   1:        0x11107f4a3 - core::fmt::write::hb5e3c29742bab55e
   2:        0x111260302 - std::io::Write::write_fmt::h3f38404afa442946
   3:        0x111262509 - std::sys_common::backtrace::print::h1ce04ba6121a0174
   4:        0x111263a75 - std::panicking::default_hook::{{closure}}::hf2e5fe71523bcace
   5:        0x1112637d2 - std::panicking::default_hook::h4f8cdc98d2dcc8c0
   6:        0x1112640ff - std::panicking::rust_panic_with_hook::h28420d44f043d3a5
   7:        0x111263e9e - std::panicking::begin_panic_handler::{{closure}}::h6f886c0e89185cdc
   8:        0x111262c59 - std::sys_common::backtrace::__rust_end_short_backtrace::h7ca2b2ff22d46410
   9:        0x111263c02 - _rust_begin_unwind
  10:        0x1112dd155 - core::panicking::panic_fmt::h52dad7a658d9bf41
  11:        0x1112dd655 - core::result::unwrap_failed::h84bbbea2e5d8da9f
  12:        0x1110704ac - bitwarden_py::client::_::<impl pyo3::impl_::pyclass::PyMethods<bitwarden_py::client::BitwardenClient> for pyo3::impl_::pyclass::PyClassImplCollector<bitwarden_py::client::BitwardenClient>>::py_methods::ITEMS::trampoline::h5cc7822e09d93461
  13:        0x10deb1b54 - _type_call
  14:        0x10de55d25 - __PyObject_MakeTpCall
  15:        0x10df2f315 - __PyEval_EvalFrameDefault
  16:        0x10df33c13 - __PyEval_Vector
  17:        0x10de560dd - __PyObject_FastCallDictTstate
  18:        0x10deb99c4 - _slot_tp_init
  19:        0x10deb1b8d - _type_call
  20:        0x10de55d25 - __PyObject_MakeTpCall
  21:        0x10df2f315 - __PyEval_EvalFrameDefault
  22:        0x10df33c13 - __PyEval_Vector
  23:        0x10de59361 - _method_vectorcall
  24:        0x10de56824 - __PyVectorcall_Call
  25:        0x10df31453 - __PyEval_EvalFrameDefault
  26:        0x10df33c13 - __PyEval_Vector
  27:        0x10de59361 - _method_vectorcall
  28:        0x10de56824 - __PyVectorcall_Call
  29:        0x10df31453 - __PyEval_EvalFrameDefault
  30:        0x10df33c13 - __PyEval_Vector
  31:        0x10de59361 - _method_vectorcall
  32:        0x10de56824 - __PyVectorcall_Call
  33:        0x10df31453 - __PyEval_EvalFrameDefault
  34:        0x10de6b750 - _gen_send_ex2
  35:        0x10de6b5d9 - _gen_iternext
  36:        0x10dfd0c18 - _islice_next
  37:        0x10de7869b - _list_extend
  38:        0x10de7840e - _list_vectorcall
  39:        0x10df30305 - __PyEval_EvalFrameDefault
  40:        0x10df33c13 - __PyEval_Vector
  41:        0x10debcf09 - _vectorcall_method
  42:        0x10debc58e - _slot_mp_subscript
  43:        0x10df25876 - __PyEval_EvalFrameDefault
  44:        0x10de6b750 - _gen_send_ex2
  45:        0x10de6b5d9 - _gen_iternext
  46:        0x10dfd0c18 - _islice_next
  47:        0x10de7869b - _list_extend
  48:        0x10de7840e - _list_vectorcall
  49:        0x10df30305 - __PyEval_EvalFrameDefault
  50:        0x10df33c13 - __PyEval_Vector
  51:        0x10debcf09 - _vectorcall_method
  52:        0x10debc58e - _slot_mp_subscript
  53:        0x10df25876 - __PyEval_EvalFrameDefault
  54:        0x10de6b750 - _gen_send_ex2
  55:        0x10de6b5d9 - _gen_iternext
  56:        0x10dfd0c18 - _islice_next
  57:        0x10de7869b - _list_extend
  58:        0x10de7840e - _list_vectorcall
  59:        0x10df2f18d - __PyEval_EvalFrameDefault
  60:        0x10df33c13 - __PyEval_Vector
  61:        0x10de560dd - __PyObject_FastCallDictTstate
  62:        0x10deb99c4 - _slot_tp_init
  63:        0x10deb1b8d - _type_call
  64:        0x10de55d25 - __PyObject_MakeTpCall
  65:        0x10df2f315 - __PyEval_EvalFrameDefault
  66:        0x10df231eb - _PyEval_EvalCode
  67:        0x10df7d388 - _run_eval_code_obj
  68:        0x10df7d318 - _run_mod
  69:        0x10df7d1a5 - _pyrun_file
  70:        0x10df7ccb3 - __PyRun_SimpleFileObject
  71:        0x10df7c676 - __PyRun_AnyFileObject
  72:        0x10df98760 - _pymain_run_file_obj
  73:        0x10df98158 - _pymain_run_file
  74:        0x10df97b48 - _Py_RunMain
  75:        0x10df98aea - _Py_BytesMain
  76:     0x7ff812faa366 - <unknown>
ERROR! A worker was found in a dead state

This is the task producing the error:

- name: "Merge service config and service defaults"
  ansible.builtin.set_fact:
    service_cfg: "{{ service_base_defaults | ansible.builtin.combine(service_defaults, all_service_defaults, service_cfg, recursive=true) }}"
  tags: [ always ]

Before implementing the bitwarden lookup as above, this same playbook was working correctly; the env variables were in plain text under service_cfg.

Several playbooks containing the same task (and the bitwarden lookup as well) finished without error, but other playbooks fail. I can’t really spot any difference between them.


I found out that the problematic playbooks include more than one bitwarden.secrets.lookup in the dictionary service_cfg, whereas the playbooks which pass without errors have only a single bitwarden.secrets.lookup entry. So I suppose simultaneous calls of the plugin produce the error. Is there any option to introduce a wait time, other than defining a task for each lookup and a pause task between them?
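For reference, one pause-free way to serialize the calls is to resolve each lookup in its own `set_fact` task before the merge, so the plugin is only ever invoked once at a time. A sketch, in which the secret IDs, variable names, and dictionary keys are all placeholders, not taken from the original playbook:

```yaml
# Sketch only: secret IDs, variable names, and keys are hypothetical.
# Each set_fact task evaluates exactly one lookup, so the plugin
# calls run sequentially instead of concurrently.
- name: Resolve admin user secret
  ansible.builtin.set_fact:
    admin_user: "{{ lookup('bitwarden.secrets.lookup', 'secret-id-1') }}"

- name: Resolve DB password secret
  ansible.builtin.set_fact:
    db_password: "{{ lookup('bitwarden.secrets.lookup', 'secret-id-2') }}"

# Only then merge the already-resolved values into the config dict.
- name: Merge resolved secrets into the service config
  ansible.builtin.set_fact:
    service_cfg: "{{ service_cfg | ansible.builtin.combine({'admin_user': admin_user, 'db_password': db_password}) }}"
```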

I’m having the same issue. Were you ever able to solve this?

They claim to have fixed it here: [SM-1147] Switch to try_init with pyo3_log by coltonhurst · Pull Request #676 · bitwarden/sdk · GitHub
I haven’t tried it yet, though.


Maybe I didn’t compile the fix correctly, or the fix doesn’t help.
This is what I tried:

I ran cargo build in crates/bws at the newest state of the main branch and then cargo install --path ., but what next? The lookup plugin still fails with a panic error as before.

It’s been a month since the fix was committed, and I feel like asking for a bugfix release wouldn’t be unreasonable. Does anyone know how we should do that?
It doesn’t feel like a bug report, but it’s also not a feature request.

This might not be a showstopper for most users, but for any Ansible Bitwarden users who template two secrets into one file it is a blocker, and one I only discovered as a result of actively moving all secrets into BW Secrets Manager.


I encountered the same problem. As a workaround you can set a fact first, and afterwards it should work (at least it did in my case):

- set_fact:
    v1: "{{ v1 }}"

- set_fact:
    v2: "{{ v2 }}"

Where v1 and v2 are two variables that use the lookup function.
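To make the workaround concrete, here is a sketch of where v1 and v2 might be defined; the secret IDs and variable names are placeholders. Re-assigning each variable to itself forces Ansible to evaluate the lazy template, and with it the lookup, one task at a time:

```yaml
# Variables defined elsewhere (e.g. in group_vars); IDs are placeholders:
#   v1: "{{ lookup('bitwarden.secrets.lookup', 'secret-id-1') }}"
#   v2: "{{ lookup('bitwarden.secrets.lookup', 'secret-id-2') }}"
# Templated variables are evaluated lazily, so the set_fact below
# triggers each lookup now, sequentially, and caches the result.
- name: Force v1 lookup to resolve
  ansible.builtin.set_fact:
    v1: "{{ v1 }}"

- name: Force v2 lookup to resolve
  ansible.builtin.set_fact:
    v2: "{{ v2 }}"
```

After these tasks, any later use of v1 and v2 (e.g. inside a combined dictionary) references the already-resolved facts instead of invoking the plugin again.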

As mentioned in a response to your question in the topic A new release of the bitwarden-sdk python package is needed, this seems to be a straightforward action now that the bug has been fixed in Rust. Let’s hope it doesn’t take too long, since this is a real blocker for Ansible users.

OK, I successfully built the current state of the main branch:

cargo build
npm install
npm run schemas
cd languages/python
maturin build

Then I replaced the packages bitwarden_py and bitwarden_sdk in my Ansible venv with the wheels from target/wheel/…. Afterwards, Ansible can finally access multiple secrets.
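For anyone following along, the replacement step can be done with pip inside the venv. A sketch, assuming the wheel filenames below (the exact names depend on the version and platform maturin produced):

```shell
# Sketch: activate the venv Ansible runs from, then force-reinstall
# the locally built wheels over the published packages.
# Wheel paths/names here are placeholders; check your build output dir.
source ~/ansible-venv/bin/activate
pip install --force-reinstall path/to/built/wheels/bitwarden_sdk-*.whl
```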

So, dear bitwarden crew, would you kindly publish a new bitwarden-sdk python package? :wink: