Hi,
What do you think about a “dead sites” report – one that collects all the services that have been offline for more than, let’s say, 2 weeks?
I’m quite sure it’s rather hard to check for such pages/websites.
You’d need to use something like https://downforeveryoneorjustme.com/, but it would be extremely hard to automatically detect when a website is gone for good.
Why would you need it anyways?
Good question – maybe just to keep database clean?
Keeping things tidy would be nice. I recently migrated a ton of data (over 700 items) from 1Password to Bitwarden, and while cleaning them up I’ve found at least several entries for services that no longer even operate. But I’m only finding these because I’m spending hours and hours combing through my vault to make sure it’s accurate, which of course I don’t want to do regularly once I’ve confirmed my data was successfully imported from 1Password in all its glory.
While I can’t quite code it myself, I can’t imagine it’d be difficult to implement. I imagine the logic would check once a day, for example, and after 5 consecutive attempts with no success, mark the entry as “Possibly Offline” or something like that.
And of course, the onus would still be on the user to confirm whether this is accurate, just like the onus is on us for all other reports, such as the Missing 2FA one, which sometimes reports 2FA for items even though the 2FA doesn’t actually exist and the upstream provider needs to be updated to reflect that (or it isn’t available in certain countries, for example). In other words, we can’t expect reports to be 100% accurate all the time; we have to validate the data ourselves. The point of such reports is to save time compared with combing through every item individually to check for things like 2FA or weak passwords.
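The “check daily, flag after repeated failures” idea above could be sketched roughly as follows. This is a hypothetical illustration, not anything Bitwarden implements – the `check_reachable` helper, the entry dict shape, and the 5-failure threshold are all assumptions:

```python
# Hypothetical sketch: flag a vault entry as "Possibly Offline" after
# several consecutive failed daily reachability checks.
import urllib.request
import urllib.error

FAILURE_THRESHOLD = 5  # consecutive failures before flagging (assumed)

def check_reachable(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers at all, even with an HTTP error."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded, so the site is alive
    except (urllib.error.URLError, OSError):
        return False  # DNS failure, refused connection, timeout, ...

def update_status(entry: dict, reachable: bool) -> dict:
    """Run once per day per entry; reset the counter on any success."""
    if reachable:
        entry["failures"] = 0
        entry["status"] = "OK"
    else:
        entry["failures"] = entry.get("failures", 0) + 1
        if entry["failures"] >= FAILURE_THRESHOLD:
            entry["status"] = "Possibly Offline"
    return entry
```

As discussed above, the flag would only be a prompt for the user to verify, not an automatic deletion.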
Couldn’t have said it better myself.
The temptation in programming is to make things as easy to automate as possible. But some things don’t have to be automated completely. I would in fact require any system like this to work through human validation. The alternative would be reaaaaal bad.
Also @DustinDauncey, don’t forget to vote if you like this idea.
Thank you! And great catch, I just voted for it now.
fwiw, I use a Python program called PyFunceble https://github.com/funilrys/PyFunceble to do something similar – dead-host checking in hosts files. I’m pretty sure it checks domains with a combination of nslookups and whois checks. On a list of, say, 700 domains it might take 30 minutes or so to check all of them.
Moreover, this could only be handled at the endpoint level, not on a central server.
Agreed. And maybe just done as you view one entry at a time or something. I also thought it would be cool if a password manager verified a site’s SSL certificate, so you’d know when a domain expired and someone else put up a parked site in its place with a fake login prompt.
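One way to sketch that certificate idea: remember the fingerprint of the certificate a site presented last time and warn when it changes. This is only an illustration of the concept – certificates rotate legitimately all the time, so a changed fingerprint is a reason to look closer, not proof of a takeover:

```python
# Hypothetical sketch: detect when a site's TLS certificate changes.
import hashlib
import socket
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def fetch_cert(host: str, port: int = 443) -> bytes:
    """Fetch the DER-encoded certificate the server currently presents."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def cert_changed(host: str, stored_fp: str) -> bool:
    """True if the live certificate no longer matches the one on record."""
    return fingerprint(fetch_cert(host)) != stored_fp
```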
Like many other long-time users here, I started with Roboform long ago, then moved over to LastPass for several years before finally moving to Bitwarden. I have many websites which are quite outdated and no longer in service. I wish there were a simple report showing which websites are no longer valid so I could easily remove them from the vault.
Can we please have a DNS report against the sites to check whether there is a valid DNS record? That might be the easiest and most reliable approach. I really can’t go through my 500+ entries to verify which are still valid. Many of my old logins might be dead websites.
DNS Report might be the best solution. Kindly implement it.
I would like a Dead URL Report – show me which logins link to dead or bad URLs.
Why? A couple reasons (but certainly not all of them):
I may not catch every change made by 3rd party websites or network admins.
People don’t have the time to personally monitor dozens of paths; instead they get interrupted by emails, calls, and support tickets. If BW ran a report on bad URLs (on demand or on a schedule), it would help me put out fires before they impact users.
In fact, as an admin, I’d like to get an alert when a user/team member hits an unsuccessful URL or network-path incident.
If you are in IT, I’m betting you run reports on network logins and access. This is not really that different.
There is actually a very real security issue that this type of functionality can help resolve.
Let’s say a user has a login stored for example.com. Now let’s say they don’t login for a while. During that time, the site becomes defunct. Either right away, or after a while, a scam operation buys the unused domain name (which is very common) and sets up a scam site. When the user visits the site again, Bitwarden will provide the scam site with the user’s credentials.
If the proposed functionality detects sites that are currently defunct, it could alert the user to this possible situation. If the proposed functionality also detects sites that are under new ownership (or have other significant DNS changes), Bitwarden can alert the user to additional concerning situations.
This is exactly why one should not reuse credentials. Each website should have a unique password. Although the bad actor may get your password for the defunct example.com, that is only a big deal if you have also used it somewhere else.
Yes, of course, but credentials often include the user’s email address. So this gives the scam site knowledge of the email address and that it was used on a site that served a particular function. Such information can be useful in many types of attacks and scams, including phishing.
No need to bloat the software for everyone else.
Yeah, I’d even pay for a Premium level if they incorporated some sort of URL / HTTP status check for all the URLs in the vault, and let us sort them by HTTP status code, then bulk-check and delete the 404s and such – in a list of 4000+ URLs, I’m pretty sure most would come back as 404s and 500 errors.
But checking them all manually, one by one… yeah, that would take months.
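The bulk status-check described above could look roughly like this, using only the standard library. The grouping approach is an assumption about how such a report might be organized, not an existing Bitwarden feature:

```python
# Sketch: check each vault URL once and bucket the results by HTTP status,
# so the 404/5xx entries can be reviewed in bulk instead of one by one.
import urllib.request
import urllib.error
from collections import defaultdict

def http_status(url: str, timeout: float = 10.0):
    """Return the HTTP status code, or None if the host is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code          # 404, 500, ... are still useful signals
    except (urllib.error.URLError, OSError):
        return None            # DNS failure, refused connection, timeout

def group_by_status(urls):
    """Map status code -> list of URLs; None groups the unreachable ones."""
    report = defaultdict(list)
    for url in urls:
        report[http_status(url)].append(url)
    return dict(report)
```

For thousands of URLs you would want to run the checks concurrently (e.g. with `concurrent.futures.ThreadPoolExecutor`), since doing them serially at a 10-second timeout each would take hours.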