the fight for greater security is an ongoing war.

    people today (especially post-9/11) have an odd fascination with divulging their personal information. many seem to believe that giving up privacy buys them a sense of security. for instance, telling the internet about the ice cream shop you visited last week gives you an alibi for where you were. on the downside, now everyone knows where you were last week. even setting aside the terrible fallacy of "nothing to hide, nothing to fear", nobody seems worried about telling people everything about themselves. in the age of social media, you're actually rewarded for it! by tying viewership & fake internet points (likes, karma, etc.) to a dopamine response, the average internet user's brain is wired to give out new personal information if it means greater acceptance by one's peers (especially strangers).


    this surrender of privacy does seem to have limits for the average end user, though. nobody wants to give out certain things, such as passwords - and understandably so! after all - as outdated as such a system is - passwords are the key to unlocking extremely sensitive information about you and your family! from bank account information to medical records, passwords are the way to see it all. yet the average end user doesn't seem to fully realize how important password management is, and loves to use the same password for everything - without even thinking to change it every so often. all that precious, sensitive data...


    the slightly-above-average end user is better than this! they change their passwords once a month, and write each one on a sheet of paper kept in a locked drawer of their desk. this type of user knows they need to keep up with the digital age, though, and seeks a solution more secure than storing passwords in plaintext on paper. after carefully avoiding password-managing browser extensions (despite the convenience), they settle on a modern desktop-based password manager. they put all their passwords into it, each matched to its service. now they only need one "master password", and they can access it all!


    there's a problem though - this password manager (like many out there) is backdoored. if the police, federal agents, the creator of the program, etc. want access to this user's passwords, they simply take it. yes, the passwords are technically encrypted, but there are ways around that. the backdoor was put in place in case of anything from law-breaking to general surveillance - or in case the developer wants to sell the associated data for a few bucks. so while this user's passwords may be private [to a degree], they certainly aren't in a secure place.


    this sort of situation leads to a terrible dilemma. on one hand, you may have "true" encryption - no backdoors, meaning external entities can't break into anything encrypted by it. but what if a terrorist keeps bomb targets in a text file encrypted under a true encryption scheme? on the other hand, you may have "typical" encryption - inaccessible to most users, but with a way around it: a backdoor. this way federal police can access such a text file, at the risk of a bad actor exploiting the backdoor and accessing the data of every other user of the encryption scheme. in short - do you allow for an encryption scheme so secure that nobody can break into it?


    i suppose i'll give my answer to this dilemma, or at least the first part of my response. freedom isn't safe. while an extreme minority may use a true encryption scheme to cover malicious activity, steps can be taken outside the encryption itself to gather sensitive information about the target and/or their origin. you don't have to compromise everyone's security for a chance at temporary safety for a small percentage of people.


    now we come to an end user whose skills are triple those of the aforementioned user. this is the average poweruser. this is the type of user that seldom touches their mouse, has every possible keyboard shortcut memorized across all sorts of systems, and self-hosts their e-mail on a computer they got for $5 at a thrift store. this particular end user knows the flaws of password managers. they know that centralizing anything on a single machine only makes a greater prize for ne'er-do-wells. so the poweruser memorizes their passwords - which they change every two weeks or so - using simple mnemonic tricks. no backdoors here; governments - the CIA especially - have so far failed at passively extracting data from the human mind. nobody can force this user to give up anything, not that anyone would.


    typically, to unlock a truly-encrypted entity, a credential of some kind (a text password, a PIN, a fingerprint, etc.) is given. this gives the end user ultimate control over the stored data, such as passwords. as this is a true encryption scheme, external entities cannot get in unless the end user gives them access. when an entity such as the [federal] police demands that something under a truly-encrypted scheme be unlocked, refusing may carry consequences. thankfully, in the US we have the 5th amendment - the right against self-incrimination. in other countries, though, this isn't considered a human right. for instance, ireland recently passed a law under which refusing to give up a password to such a scheme is a criminal offense, punishable by a €5,000 fine and up to a year in prison. a few years ago, a man in london was convicted under anti-terrorism law for not giving his passwords to border security at an airport, and given a £620 fine.
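

    to make "true" encryption concrete, here's a minimal sketch in python (assuming the third-party cryptography package is installed) - the key exists only as a function of the user's password, so there is simply no backdoor to hand over. the function name and sample strings are made up for illustration.

        import base64, hashlib, os
        from cryptography.fernet import Fernet

        def key_from_password(password: bytes, salt: bytes) -> bytes:
            # scrypt is deliberately slow, making brute-force guessing expensive
            raw = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
            return base64.urlsafe_b64encode(raw)

        salt = os.urandom(16)  # not secret; stored alongside the ciphertext
        vault = Fernet(key_from_password(b"one master password", salt))
        token = vault.encrypt(b"bank: hunter2")  # unreadable without the password
        print(vault.decrypt(token))              # b'bank: hunter2'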


    we re-visit the poweruser. since, depending on the country they're in, they can now be tried in criminal court for refusing to self-incriminate (even if they did nothing wrong), how can they safely store their passwords in a manner that cannot be accessed by anyone else, doesn't require a master password to get in, and that they can still easily access? unfortunately, i don't believe the technology is there yet. however, there are promising pathways to such a feat.


    consider decentralized technology: torrenting, IPFS, I2P, etc. this sort of tech allows data to be shared without a centralized source to target. it can offer faster transfer speeds (by pulling from many peers at once), can work outside of the internet in some cases, and can utilize encryption at its most core level, as blockchain technology does. as a way of keeping data secure, striping the data across a decentralized protocol may prove worthwhile - this way, no single user or server holds everything needed to reconstruct any one thing. for the typical user, though, a setup like this is difficult to use, and unless it runs without WAN access, a power outage or loss of connectivity makes it an impractical method of storage - one that may not even be encrypted!
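

    to illustrate the striping idea, here's a minimal python sketch using xor-based secret splitting: no single shard - in fact, no strict subset of shards - reveals anything about the secret. actually scattering the shards over IPFS/I2P/whatever is left out, and all names here are hypothetical.

        import secrets
        from functools import reduce

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def split(secret: bytes, n: int) -> list[bytes]:
            # n - 1 shards are pure randomness; the last one cancels them out.
            # any n - 1 shards together still look like random noise.
            pads = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
            return pads + [reduce(xor, pads, secret)]

        def join(shards: list[bytes]) -> bytes:
            return reduce(xor, shards)

        shards = split(b"hunter2", 4)  # scatter these across 4 peers
        print(join(shards))            # b'hunter2' - only with all 4 shards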


    alright, so let's reiterate. we know reusing the same method-of-entry (MoE) isn't secure, so if one is to be used, it needs to change frequently. centralizing a MoE digitally may prove effective, provided the encryption scheme used is true and perfect - but the fault in this method is the same as with memorizing such keys: external entities can punish those who refuse them entry, no matter how malicious those entities may be. a decentralized system might be preferable, but it is imperfect, minimally convenient, and isn't necessarily encrypted.


    out of the methods of secure data storage covered here, it seems that memorization is the best possible way of doing things - at the potential cost of your entire life. there's something interesting to question here though - how does our brain even store anything? we don't exactly have dedicated cells for memory, nor are our brains laid out like a typical non-biological computer; there are no separate banks of cells for task-specific processes in the same sense as something like a laptop. for long-term storage in the brain to occur, certain neurons fire in a certain order. the more times that path fires, the stronger the pathway, and the clearer the memory. the only way a memory is "deleted" is if the cells themselves die, resulting in an incomplete (or even destroyed) path.
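

    as a rough caricature of that "stronger with each firing, degraded by cell death" idea, here's a tiny hebbian-style python sketch (numpy assumed). this is a toy, not neuroscience - every value is made up.

        import numpy as np

        rng = np.random.default_rng(1)
        pattern = rng.choice([-1.0, 1.0], size=16)  # a "memory" as a pattern of activity

        W = np.zeros((16, 16))
        for _ in range(10):                          # each firing strengthens the path
            W += np.outer(pattern, pattern) / 16     # hebbian: fire together, wire together

        recalled = np.sign(W @ pattern)              # the pattern recalls itself
        print(np.array_equal(recalled, pattern))     # True

        W[:4, :] = 0                                 # "kill" cells along part of the path
        damaged = np.sign(W @ pattern)               # recall degrades, not cleanly deletes
        print(np.mean(damaged == pattern))           # 0.75 - most of the memory survives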


    for decades, research has been conducted on theoretical (and, more recently, applied) artificial analogues of the mammalian brain - known as artificial neural networks (ANNs). memories are learned in a similar fashion, with certain paths trained to represent certain memories. while this is a fantastic leap towards mimicking how our brains may function, the data itself is still stored in a rather centralized manner (even if we stripe it, as discussed above). the human brain doesn't centralize the data itself anywhere; instead, it represents things as trained neural pathways.


    i suppose i should plainly ask my question, if you haven't figured it out already - when will we be able to securely store data within a neural network in a way that acts as our brains do, within the conceptual outline of true encryption? consider a neural network where a password is stored as a set of trained neurons with certain weights, where those neurons can still be used for other purposes. an external trigger causes the weights to automatically adjust for the context, giving the user the password on request, without the need for a MoE. i propose that fellow security & ANN enthusiasts research this thoroughly, as it could mean an entirely new era of both security & privacy.
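

    as a starting point - and emphatically not the scheme i'm asking for, let alone a secure one - here's a toy python/numpy sketch of "a password as trained weights": a specific trigger input reproduces the password through the weights, while the weights never contain the bytes verbatim. every name and value here is made up for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        password = b"correct horse"
        target = np.frombuffer(password, dtype=np.uint8) / 255.0  # bytes -> [0, 1]

        trigger = rng.standard_normal(32)                 # the "context" that cues recall
        W = rng.standard_normal((len(target), 32)) * 0.1  # the trained pathway

        # plain gradient descent: nudge W until W @ trigger ~ target
        for _ in range(2000):
            err = W @ trigger - target
            W -= 0.01 * np.outer(err, trigger)

        recalled = np.clip(W @ trigger, 0.0, 1.0)
        print(bytes(np.rint(recalled * 255).astype(np.uint8)))  # b'correct horse'

        # note: anyone holding both W and the trigger can recall it too - the
        # open problem above is making the trigger something only the rightful
        # user can produce, without it becoming just another MoE.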