Dealing with Death: Social Networks and Modes of Access

Social platforms need to provide their users with more access schemes, while developers need to experiment with innovative access styles.
Jun 3rd, 2023 6:00am

One increasingly common problem faced by social networks is what to do about death. Getting access to the account of a deceased friend or relative usually involves at least three steps, depending on the jurisdiction:

  1. Get a copy of the death certificate;
  2. Get letters testamentary (both tech companies and financial institutions will ask you to prove not only that the person is dead, but also that you have a legal right to access their accounts);
  3. Reach out to the platform.

This is all quite unreasonable if the goal is simply to put a sticky note on the profile page explaining why the deceased user is no longer responding. Waiting for a death certificate, and for other processes that move at lawyerly speed, only adds to the misery. Social media companies are not (and don’t want to be) secondary registrars of deaths; indeed, we know that accounts regularly represent entities that were never alive in the first place.

What is really missing here, and what this article looks at, are different modes of access as part of a fully functional platform. Designers need to create alternative, systematic access methods that handle these scenarios without anyone having to hack around their own systems.

The Case for Backdoors

The focus on security has produced unbalanced digital fortresses that now regard their own users’ accounts as potential risks. The term backdoor was originally meant to imply an alternative access route, but now it simply means something to be boarded up tight in the next patch, before a security inquest. This has the unfortunate consequence of limiting the options for users.

In the early days of computing, when software was still distributed on floppy disks, people updated their applications far less often, and alternative access to fix errors or make minor changes was quite normal. Magazines were full of cheats, hacks and hints, some authorised, some not. Before the full suite of integrated testing became available, developers often added backdoors to test certain scenarios in an application. Today, we are no longer encouraged to think that we own running software at all, and that has changed how we think about accessing it.

In the example of a deceased user of a social media platform, the most straightforward solution is for a third-party legal company to hold a key in escrow. That company would then be charged with communicating with concerned humans. However, the ‘key’ would not allow a general login; it would purely be used to suspend an account, or to insert a generic account epitaph. The third party concentrates on its role of soberly talking to friends, relatives or possibly other lawyers, while the platform simply maintains its services. (And yes, that could also mean police services could halt an account without having to negotiate with the social media company.) The agreement might only become mandatory once an account crossed a size or age threshold. From a development point of view, the special access would need to be detected, along with confirmation that the account had indeed been suspiciously quiet.
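
As a rough sketch, validating such a scoped escrow key might look like the code below. Everything here is an illustrative assumption, not any platform’s real API: the action names, the 90-day inactivity figure and the key format are all invented for the example.

```python
from datetime import datetime, timedelta, timezone
import hmac

# Hypothetical actions an escrow key may authorise.
# Deliberately narrow: it never grants a general login.
ESCROW_ACTIONS = {"suspend_account", "post_epitaph"}

# Illustrative threshold: how long the account must have been quiet.
INACTIVITY_THRESHOLD = timedelta(days=90)

def authorise_escrow_action(presented_key: str, stored_key: str,
                            action: str, last_activity: datetime) -> bool:
    """Allow a third-party escrow holder to perform one narrow action.

    Three checks: the key is genuine (compared in constant time),
    the action is within the escrow scope, and the account really
    has been suspiciously quiet.
    """
    if not hmac.compare_digest(presented_key, stored_key):
        return False
    if action not in ESCROW_ACTIONS:
        return False
    return datetime.now(timezone.utc) - last_activity >= INACTIVITY_THRESHOLD
```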

Launching a Nuke

You may have seen the familiar dramatic film device where two people have to turn their keys to launch a nuclear missile, or open a safe. It is a trope even used by Fortnite.

The two-man rule is a real control mechanism designed to achieve a high level of security for critical operations. Access requires the presence of two or more authorised people. If we step back a bit, it is simply a multi-person access agreement. Could this be useful elsewhere?

Returning to examples on social media, I’ve seen a number of times when a friend has said something relatively innocent on Twitter, boarded a plane, only to turn his network back on to discover a tweet that has become controversial. What if his friends could temporarily hide the tweet? Like the missile launch, it would need two or more trusted users to act together. Again, the point here is to envision alternative access methods that could be coded against. Given that the idea is to help the user while they are temporarily incapacitated, the user can immediately reverse any such action simply by logging back on.

The only extra concept required here is the definition of a set of trusted friendly accounts, any of whom the user feels “has their back.” In real life this is pretty normal, even though we still envision social media accounts as existing in a different time and space. In fact, you might imagine that a user who can’t trust any other accounts probably isn’t suitable to be on social media.

Implementing this concept would require defining a time period after which a friendly intervention could be considered, and a way to check that the required quorum triggered the intervention at roughly the same time. One imagines that once you become a designated friend of another user account, the option to signal concern would appear somewhere in the settings of their app. This is certainly a more complex set of things to check than standard access, and it could well produce its own problems in time.
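
A minimal sketch of that quorum-and-timing check follows. The quorum size, the six-hour signalling window and the function names are hypothetical choices made for the example:

```python
from datetime import datetime, timedelta

# Illustrative parameters: how many trusted friends must agree,
# and how close together their signals must arrive.
REQUIRED_QUORUM = 2
SIGNAL_WINDOW = timedelta(hours=6)

def quorum_reached(signals: dict[str, datetime],
                   trusted_accounts: set[str]) -> bool:
    """Return True if enough trusted friends signalled concern
    at roughly the same time.

    `signals` maps an account ID to the moment it raised concern;
    only designated trusted accounts count towards the quorum.
    """
    times = sorted(t for account, t in signals.items()
                   if account in trusted_accounts)
    # Slide over the sorted timestamps looking for REQUIRED_QUORUM
    # signals that all fall within SIGNAL_WINDOW of each other.
    for i in range(len(times) - REQUIRED_QUORUM + 1):
        if times[i + REQUIRED_QUORUM - 1] - times[i] <= SIGNAL_WINDOW:
            return True
    return False
```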

Both using a third-party escrow key and relying on a group of friendly accounts define a three-way trust system, which should be a familiar way to distribute responsibility. It is how a bank, a merchant and a buyer complete a purchase transaction. Testing these systems is similar in nature: first acknowledge the identity of the parties, then confirm that they have permission to perform the action, and finally confirm that the action is appropriate at the time.
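
Expressed as code, that three-step check might be structured as a pipeline of predicates. The predicates below are purely illustrative stand-ins for whatever a real platform would actually verify:

```python
from typing import Callable

# A check takes the intervention request and says yes or no.
Check = Callable[[dict], bool]

def verify_identity(request: dict) -> bool:
    # Stand-in: is the actor authenticated at all?
    return request.get("actor") is not None

def verify_permission(request: dict) -> bool:
    # Stand-in: was this actor granted this action on this account?
    return request.get("action") in request.get("granted_actions", set())

def verify_timing(request: dict) -> bool:
    # Stand-in: is the account in a state where the action makes sense?
    return request.get("account_state") == "eligible"

# Identity first, then permission, then appropriateness at the time.
CHECKS: list[Check] = [verify_identity, verify_permission, verify_timing]

def authorise(request: dict) -> bool:
    return all(check(request) for check in CHECKS)
```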

Negative Intervention

A natural variation on third-party intervention for an incapacitated user is where a third party wants to stop an account because they believe it has been hacked or stolen. The obvious difference here is that the current user cannot be allowed to simply cancel the action. Social media companies may eventually close down a suspicious account, but there doesn’t seem to be a systematic way for users to trigger this independently.

This is a harder scenario to implement, as it needs a way for the authentic user to resolve the situation one way or another. Social media companies do, of course, keep alternative contact details for their users, so the user could signal that all is well; that the account really has been taken; or that the account was taken but has now been recovered. Until that happens, the account is in a slightly strange state: under suspicion, yet not officially so. Should the account be trusted? Perhaps the friends themselves are not themselves?
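
One way to picture that in-between state is as a small state machine, where only the authentic user, reached through out-of-band contact details, can move the account out of suspicion. The state names and resolution signals below are hypothetical:

```python
from enum import Enum, auto

class AccountState(Enum):
    NORMAL = auto()
    UNDER_SUSPICION = auto()   # flagged as possibly taken, not yet official
    CONFIRMED_TAKEN = auto()
    RECOVERED = auto()

# The three resolutions described above, signalled by the
# authentic user via their alternative contact details.
RESOLUTIONS = {
    "all_is_well": AccountState.NORMAL,
    "account_taken": AccountState.CONFIRMED_TAKEN,
    "taken_but_recovered": AccountState.RECOVERED,
}

def resolve_suspicion(state: AccountState, signal: str) -> AccountState:
    """Resolve a suspicion flag; any other transition is rejected."""
    if state is not AccountState.UNDER_SUSPICION:
        raise ValueError("account is not under suspicion")
    if signal not in RESOLUTIONS:
        raise ValueError(f"unknown resolution signal: {signal}")
    return RESOLUTIONS[signal]
```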

Get Back In

If you feel the examples above are odd, you shouldn’t. They are really just extensions of what happens when, in real life, you lock yourself out of your home and fetch a spare key from your neighbour — or ask the police not to arrest you when you smash your own window to get back in. While platforms need to regard their users with less suspicion and provide more access schemes, developers also need to experiment with innovative access styles. (Actual security breaches are often caused by disgruntled staff selling sensitive data.)

There is no question that AI could help make the kinds of assessments mentioned throughout this article. Is an account acting suspiciously? Has it been quiet longer than usual? Has a two-man rule been activated? AI might also prove well suited to orchestrating these edge-case scenarios.

Maybe with the help of GPT and more experimentation, users may find that recovery from uncommon but unfortunate scenarios will be less fraught in the future.
