Importing SplashData Gpass record exports in CSV format

It’s not obvious which CSV-based format to choose when importing from SplashData’s Gpass. If you try the SplashID (csv) importer, a number of spurious folders are created and your data is not imported correctly.

The export I’m dealing with ([email protected]_records_2024-06-20 00 54 40.csv) has a first line of field names, description,note,Field0,Type0,Value0,Field1,Type1,Value1, …, Field11,Type11,Value11, and all other lines of the form "Bitwarden Community Forums","Notes field content",URL,text,community.bitwarden.com,Username,text,[email protected],Password,text,"Hello ""Bitwarden""!", where non-present (field, type, value) triplets are simply omitted. Gpass supports folders, but this export does not use them.

The previous example is a Gpass Web Login; a Gpass Credit Card entry has the form "Bitwarden Credit Union",,Number,text,1234567891234567,Expiration,text,01/99,"Customer Service Number",text,"212-555-0123",CVV,text,123. Note that there is no explicit item-type marker. Field names in Gpass also seem to be fairly freeform, so you may want to test which names Gpass requires and/or recognizes automatically for each item type. (I don’t use Gpass, so I can’t provide information there.)
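To make the row shape concrete, here is a minimal TypeScript sketch of how such a row decomposes. The names and types are my own (not from Gpass or Bitwarden), and it assumes the line has already been CSV-parsed into cells, e.g. by a library such as papaparse:

```typescript
interface GpassField {
  field: string; // e.g. "URL", "Username", "Password", "Number", "CVV"
  type: string;  // "text" in every example I have seen
  value: string;
}

interface GpassRecord {
  description: string; // column 1, e.g. "Bitwarden Community Forums"
  note: string;        // column 2; may be empty
  fields: GpassField[];
}

function parseGpassRow(cells: string[]): GpassRecord {
  const [description = "", note = "", ...rest] = cells;
  const fields: GpassField[] = [];
  // The remaining cells are trailing (field, type, value) triplets;
  // absent triplets are simply omitted from the row.
  for (let i = 0; i + 2 < rest.length; i += 3) {
    fields.push({ field: rest[i], type: rest[i + 1], value: rest[i + 2] });
  }
  return { description, note, fields };
}
```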

My reading of the splashid-csv-importer.ts code is that it expects the CSV columns to be (see the sketch after this list):

  1. Item type (e.g. “Web Logins”)
  2. Username
  3. Password
  4. Website URI
  5. Notes (this column and all remaining non-empty columns are converted to Notes text).
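Paraphrased as a minimal TypeScript sketch (my own names, not Bitwarden’s actual code), that mapping looks like:

```typescript
interface SketchedItem {
  type: string;     // column 1, e.g. "Web Logins"
  username: string; // column 2
  password: string; // column 3
  uri: string;      // column 4
  notes: string;    // columns 5+ (non-empty ones), joined as Notes text
}

function mapSplashIdRow(cells: string[]): SketchedItem {
  const [type = "", username = "", password = "", uri = "", ...rest] = cells;
  return {
    type,
    username,
    password,
    uri,
    notes: rest.filter((c) => c !== "").join("\n"),
  };
}
```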

The export format you have described will not be parseable by the current importer code. Perhaps the SplashID export format has changed since Bitwarden’s importer was implemented in 2018.

AFAIK, SplashID and Gpass have separate code bases, so it’s very plausible that their export formats have diverged completely by now.

Unless you want to write your own converter, you could potentially accomplish the Bitwarden import by sorting on Column #3 (Field0), splitting the .csv into multiple files according to the item type inferred from the value of Field0 (e.g., URL→login item, Number→card item, etc.), and finally conditioning each .csv thus created. However, this approach assumes that the (field, type, value) triplets appear in a consistent order for a given item type, which I am unable to confirm. A rough sketch of the splitting step follows.
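This sketch assumes Node.js and that no quoted cell contains an embedded newline; the filenames and the inline CSV splitter are my own placeholders, and for real data a full CSV library (e.g. papaparse) would be preferable:

```typescript
import * as fs from "fs";

// Minimal CSV cell splitter that handles double-quoted cells and "" escapes.
function splitCsvLine(line: string): string[] {
  const cells: string[] = [];
  let cur = "";
  let inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++; }
      else if (ch === '"') inQuotes = false;
      else cur += ch;
    } else if (ch === '"') {
      inQuotes = true;
    } else if (ch === ",") {
      cells.push(cur);
      cur = "";
    } else {
      cur += ch;
    }
  }
  cells.push(cur);
  return cells;
}

// "gpass_records.csv" is a placeholder for the actual export filename.
const [header, ...rows] = fs
  .readFileSync("gpass_records.csv", "utf8")
  .split(/\r?\n/)
  .filter((l) => l.length > 0);

// Group rows by Field0 (CSV column 3), which hints at the item type:
// e.g. URL -> login items, Number -> card items, etc.
const groups = new Map<string, string[]>();
for (const row of rows) {
  const field0 = splitCsvLine(row)[2] ?? "";
  const group = groups.get(field0) ?? [];
  group.push(row);
  groups.set(field0, group);
}

// Write one .csv per group, keeping the original header line.
for (const [field0, group] of groups) {
  fs.writeFileSync(`gpass_${field0 || "unknown"}.csv`, [header, ...group].join("\n") + "\n");
}
```

Grouping with a Map makes an explicit pre-sort unnecessary; the final conditioning step (reordering and mapping each group’s columns into a format one of Bitwarden’s importers accepts) would still have to be done per item type.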