Argon2id settings. Higher values = better?

Hello guys!
I have switched to Argon2id and I wanted to tweak it a little.
Could anyone tell me what makes it harder to attack?
Does a higher value of KDF iterations make it harder to attack?
Does a higher value of KDF parallelism make it harder to attack?
The only value I understand is the memory, which makes it harder to attack if it’s higher.
I know that it might be unnecessary to increase the default values, but I just want to know what they really mean.
Thanks :slight_smile:

@Jirka_xxgmxx Welcome to the forum!

Increasing iterations definitely makes the master password harder to crack, but it also increases the time it takes for you to log in or unlock your vault.

The effect of increasing parallelism is harder to understand. You can read some discussion in previous threads:

 

I think the main consensus is that if you increase parallelism, you should also increase memory. One way to think about it is that your main goal should be to increase memory; by increasing parallelism, you may be able to further increase the memory setting without unduly slowing down your own login/unlock experience.
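
If you want a feel for how a particular combination behaves on your own hardware before changing your vault settings, you can time Argon2id locally. Here is a minimal sketch, assuming the third-party argon2-cffi Python package; it exercises the Argon2id primitive only (it is not Bitwarden's client code), and the parameter values in it are illustrative, not recommendations:

```python
# Rough local timing of Argon2id parameter sets using the argon2-cffi package
# (pip install argon2-cffi). PasswordHasher uses Argon2id by default and takes
# memory_cost in KiB. Illustrative only; not Bitwarden's own implementation.
import time
from argon2 import PasswordHasher

def time_kdf(memory_kib: int, iterations: int, parallelism: int) -> float:
    """Return wall-clock seconds for one Argon2id derivation on this machine."""
    ph = PasswordHasher(
        time_cost=iterations,
        memory_cost=memory_kib,
        parallelism=parallelism,
    )
    start = time.perf_counter()
    ph.hash("correct horse battery staple")  # placeholder password
    return time.perf_counter() - start

# Example: raise memory together with parallelism and see what it does to the
# delay you would feel at unlock. Whether extra parallelism actually shortens
# the wall time depends on how the local argon2 binding was built (threading).
for m_mib, t, p in [(64, 3, 4), (128, 3, 8)]:
    print(f"m={m_mib} MiB, t={t}, p={p}: {time_kdf(m_mib * 1024, t, p):.2f} s")
```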


Following on from @grb’s comment, it appears that you can increase security pretty much equivalently by increasing either iterations or memory, at a similar time cost – an indication that they have a similar security effect.

Parallelism I just set to the default for the available processors on my least capable device. For my part, I have been unable to find a significant advantage or disadvantage in diverging from the Argon2 recommendation of 2 × cores, so I accept it as is.

While I confess that I too use beyond-standard settings, they are hardly required for users not in the spook business. The standard settings are very highly secure. However, if you have the processing grunt not to be bothered by delays, then increasing the settings becomes pretty much cost free, whether it really helps or not.

Anyone using Bitwarden competently is far more likely to be attacked by means other than a direct attempt to break the master password. Adding a word to your passphrase is also highly effective and does not slow anything down but your typing.

To the defender, increasing memory or increasing iterations has a similar effect in terms of login/unlock delay. However, for an attacker, an increase in memory is generally more costly.

I was using iterations 10, memory 1000 MB, and parallelism 6 for a long time with no issue. The Chrome extension was a little slow, not going to lie, but the iPhone logged in instantly.
After a few years I reached a point where an iPhone app update started to warn me about the memory being set too high and stopped autofill, but I could still log into the app directly.

Going back from 1000 MB to 50 MB is quite a “downgrade” in security :grinning:. I understand that the memory is a hard limit for some devices, and the other values just make it slower. From the threads I just read, parallelism 1 seems like the hardest setting, because it limits the ability to use multiple cores, if I understand it correctly.

@grb, do you have a source for that view please? It is the case that PBKDF2 suffers the problem you describe.

Argon2id operates differently. It is memory-hard, and it is also hard in terms of iterations, because each pass has exactly the same memory cost as the previous one and cannot commence until the previous one has completed. That is, both memory and iterations are linearly effective. This is well discussed on Cryptography Stack Exchange and in the Argon2 papers. It is illustrated by the OWASP recommendations as follows:

Rather than a simple work factor like other algorithms, Argon2id has three different parameters that can be configured: the base minimum of the minimum memory size (m), the minimum number of iterations (t), and the degree of parallelism (p). We recommend the following configuration settings:

  • m=47104 (46 MiB), t=1, p=1 (Do not use with Argon2i)
  • m=19456 (19 MiB), t=2, p=1 (Do not use with Argon2i)
  • m=12288 (12 MiB), t=3, p=1
  • m=9216 (9 MiB), t=4, p=1
  • m=7168 (7 MiB), t=5, p=1

These configuration settings provide an equal level of defense, and the only difference is a trade off between CPU and RAM usage.

For equivalent protection against attack, m and t are inversely proportional, as shown in the practical recommendations by OWASP as well as in papers and discussion of the method. Where the balance falls depends a little on which resource you are better equipped with; the aim is to make the most of both the RAM and the CPU you have available.
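
To see where that trade-off lands on your own hardware, you can time the five OWASP rows locally. A small sketch, assuming the argon2-cffi Python package (this says nothing about attacker cost; it only shows how the CPU/RAM balance feels to you):

```python
# Time the five OWASP-recommended Argon2id rows locally (argon2-cffi package).
# PasswordHasher defaults to Argon2id; memory_cost is in KiB. OWASP's claim is
# that each row gives roughly equal defense, trading CPU against RAM.
import time
from argon2 import PasswordHasher

for m_kib, t in [(47104, 1), (19456, 2), (12288, 3), (9216, 4), (7168, 5)]:
    ph = PasswordHasher(time_cost=t, memory_cost=m_kib, parallelism=1)
    start = time.perf_counter()
    ph.hash("correct horse battery staple")  # placeholder password
    print(f"m={m_kib} KiB, t={t}, p=1: {time.perf_counter() - start:.3f} s")
```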

It is also argued by some that increasing p without modifying the other factors itself increases security, and this might be inferred from OWASP given that no trade-off is described for p. I have not seen the same clear evidence for that as for m and t, so I go with the standard recommendation of p = 2 × cores.

The source of the OWASP recommendations (Steve Thomas a.k.a. sc00bz) notes that the values used by OWASP are “for low memory usage (≲64 MiB)”, and that the results are “based on…memory bandwidth” (which I interpret to mean that the cost will become limited by available memory instead of bandwidth when the memory setting is higher than 64 MiB).

The formula given for the bandwidth-limited case is that the memory lower bound (when p=1) should be selected as follows:

m≥93,750/(3*t-1)*α

I suspect, though, that there is a missing set of parentheses in the above equation, and that the formula is actually supposed to be

m ≥ 93,750/((3 t – 1)×α)

No definition is provided for the parameter α, but it is stated that for m>64 MiB, “α drops proportional to memory increase”.
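
For what it’s worth, if I take that parenthesization, read m as KiB, and assume α ≈ 1 at or below ~64 MiB, the formula reproduces the OWASP rows above once rounded up to whole MiB, which is some support for that reading. A small sketch of that check (my interpretation, not an authoritative calculator):

```python
# Check my reading of sc00bz's bandwidth-limited lower bound:
#   m >= 93,750 / ((3t - 1) * alpha), with m in KiB, p = 1, alpha ~= 1 for
#   memory at or below ~64 MiB. The parenthesization and the KiB unit are my
#   interpretation of the formula, not the author's stated definition.
import math

def min_memory_kib(t: int, alpha: float = 1.0) -> int:
    """Suggested minimum Argon2id memory (KiB) for iteration count t."""
    return math.ceil(93_750 / ((3 * t - 1) * alpha))

for t in range(1, 6):
    kib = min_memory_kib(t)
    print(f"t={t}: >= {kib} KiB (~{math.ceil(kib / 1024)} MiB)")
# Rounds up to 46, 19, 12, 9, 7 MiB -- matching the OWASP table quoted above.
```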

Furthermore, the authors of the Argon2 algorithm have explained that the goal of Argon2 is to make a memory-hard algorithm, which is defined as having a superpolynomial time-space tradeoff relationship (see Slide #16 in this presentation). My understanding of this (which is a bit shaky, to be honest) is that time is not inversely proportional to memory (in the sense that it is not linearly proportional to the inverse of memory, so that the time×memory product is not constant).

In contrast, iterations clearly have a directly (linearly) proportional relationship to both the attacker’s cracking time and the defender’s login/unlock delay time. So I conclude that the effects of memory and iterations are not equivalent.

Nonetheless, I am open to changing my mind upon further clarification of these topics.

Hi @grb, I am aware of Steve’s formula, though I do not recall seeing other similar commentary. The question with memory hardness is the time-area product: where k is that product, we know convincingly that k is linearly proportional to m, and the question is how k also scales with t.

Basically, I have suggested that the scaling with t is approximately linear (a factor of roughly t itself), while Steve Thomas’s formula uses 3t − 1 as the divisor, though unfortunately he does not provide the basis for it (his link on the subject went nowhere for me).
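
To put the two positions side by side in symbols (my own restatement; reading Steve’s formula as an equal-defense contour is my inference, not something he states explicitly):

my suggestion:  k ∝ m · t

Steve’s formula (at equal defense):  m · (3t − 1) · α ≈ 93,750  ⇒  k ∼ m · (3t − 1)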

I shall have some time to read the presentation this evening, and I also have a possible direct guide in the discussion of attacks in the RFC. I think it would be invaluable if we could give people who newly ask (as they will again) a consensus view on parameter variation, or a method for determining their own settings. That reasonably grounded consensus is my aim.