The foundation of the modern product development process known as design thinking starts with a question: “What is the problem that users are trying to solve, and how can my product be useful and meaningful to users in solving that particular challenge?”
This line of thinking has been the impetus for making both digital and physical products more consumer-friendly. “There is no such thing as user error” is regularly quoted in today’s product management and design classes as product designers strive to make difficult technical tasks as simple as clicking a button or flipping a switch.
There’s no doubt that this design philosophy has made many aspects of life easier. Today, all I need to do is press a few buttons on my smartphone to get a car that will drive me anywhere, and I don’t even need to pull out my wallet. By layering more and more technology, smart algorithms, and artificial intelligence onto daily logistical problems, we’ve outsourced many of the mundane tasks of life to our digital assistants.
But people who practice design thinking in building and improving products aren’t in the business of making things easier because it’s good for society. In a corporate context, designers are often forced to solve business problems over user needs when the two come into conflict, creating products that are arguably more “business focused” than “user focused”. And while design thinking proponents will say that good products are where feasibility, desirability, and viability all meet, viability usually has the first say and desirability the last.
Handing over daily tasks to software running on the internet means that we have to give up our data to the code and the technology companies that increasingly run our lives. The model of many tech startups today is to offer a free app, collect as much data as possible about users and how they use the app, then monetize that data by partnering with advertisers. As the saying goes, “if you’re not paying for the product, you are the product.”
On a simple level, Google can offer free search, email, and storage to everyone because it sells pieces of the results you see to advertisers. On a more complex level, the logic in the system is so intricate that not a single engineer at Google could explain how the entire thing works. Your physical location, your search history, your browsing history, and your communications all contain thousands of pieces of data and metadata that get routed around the internet to anyone who is willing to pay for them.
But for individuals who prefer not to give up their privacy, there currently isn’t much of an alternative. Simply deleting your Gmail account is not sufficient to prevent Google from collecting data on you, because any time you receive an email from someone using a Gmail account, Google has access to that email. In fact, any website that loads any assets from Google will give Google knowledge of who you are. In this world, there simply is no way to remain in control of your own personal data.
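To make that last point concrete, here is a minimal sketch of the HTTP request a browser typically sends when a page embeds an asset hosted by a third party (the URLs, paths, and cookie value are hypothetical, and real browsers add many more headers). The `Referer` header tells the asset host which page you are reading, and any long-lived cookie ties your visits together across every site that embeds that host’s assets.

```python
def third_party_asset_request(page_url: str, asset_host: str,
                              asset_path: str, cookie: str = "") -> str:
    """Build the plaintext HTTP request a browser sends for an embedded asset."""
    headers = [
        f"GET {asset_path} HTTP/1.1",
        f"Host: {asset_host}",
        # Tells the asset host exactly which page you are visiting.
        f"Referer: {page_url}",
    ]
    if cookie:
        # A long-lived identifier that links this visit to all your others.
        headers.append(f"Cookie: {cookie}")
    return "\r\n".join(headers) + "\r\n\r\n"

# Hypothetical example: a blog post that loads a Google-hosted web font.
request = third_party_asset_request(
    page_url="https://example-blog.com/post/privacy",
    asset_host="fonts.googleapis.com",
    asset_path="/css?family=Roboto",
    cookie="NID=abc123",  # placeholder cookie value, not a real one
)
print(request)
```

The page author only wanted a font, but the visitor’s browser volunteered both the page being read and a cross-site identifier to a third party, which is why opting out by deleting an account accomplishes so little.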
Consumer Paywalls vs Free Software
As a thought experiment, let’s consider what would happen if companies such as Google and Facebook provided a subscription tier: you would pay a certain amount of money a month (probably more than the revenue they currently earn from you), and in exchange they would promise to show you no ads, not track you across the internet, and give you full control over what data the company collects and how it stores and manages that data. As a baseline, this kind of system would allow companies to be more transparent with their customers, and customers would have a more significant voice in the products and features.
Unfortunately, this model rarely works at scale. The digital products whose business models are built around privacy and security are far from household names, and not easily discoverable. And while these products may be sustainable and even profitable, a security-focused email solution like ProtonMail will simply never have the market share of Gmail, because Gmail is free for everyone, offered by Google, and used by most of the world.
This is even more true for products that depend on some kind of network effect. While I might hypothetically be willing to pay for a secure messaging platform, not all of my friends will feel the same. In fact, it’s difficult enough to convince many of my friends to switch to Signal, the free encrypted messaging app. Network effects exert an extremely powerful pull on applications that form a networked walled garden.
The problem with digital invasions of privacy isn’t that there are no ways of protecting your private information online – there are plenty. The problem is that companies are incentivized against providing sensible privacy defaults for how they collect, store, and monetize your digital information. This means the average internet user has no idea how their data is being used or how to secure themselves online.
Regulations vs Privacy
Well, if the incentives of a capitalist market offer no solution for digital privacy, what if we regulated data the way we regulate environmental pollutants? Since polluters face a similar disincentive to clean up after themselves, might regulations make sense for protecting digital data?
However, trying to protect privacy through regulations only incentivizes compliance, not results. That said, if the regulations are properly written, enforced, and audited, compliance can result in better security. Unfortunately, many of the people who have the power to write and pass regulations on internet privacy don’t understand enough about how systems are defended and breached.
If we look at regulatory examples for digital data, the EU maintains some of the strictest rules regarding personal information. For example, websites in the EU are required to obtain permission from the user before they are allowed to collect any information from the user’s device. Privacy principles such as the right to access your data at any time, the right to be forgotten, making data private by default, and making companies take responsibility for the risks of using the data now serve as guiding principles for how software is built.
And while similar regulations already exist in the US, in addition to rules specific to minors such as the Children’s Online Privacy Protection Act, regulations are necessary but insufficient. Because regulations must be written prescriptively but measured descriptively, it can be difficult to determine what satisfies the rules and what doesn’t. Add to this the fact that technology changes far faster than regulations can keep up, and it may already be too late by the time a regulatory agency comes around to audit a startup that has changed its product three times.
Nation States vs Federal Regulations
Additionally, unlike with environmental regulations, federal governments have a vested interest in collecting private information from everyday users. Because governments are disproportionately incentivized to conduct cyber-espionage in the name of “national security,” they want ways of intercepting and monitoring private data as it is transferred across the internet.
State actors often have resources to track individuals that corporations lack. However, state actors operate by manipulating or exploiting products built by companies, whether that means hacking routers, collecting data off an iPhone, colluding with a device manufacturer, or sending Facebook a gag order.
Under these circumstances, it’s up to the public to have reasonable, informed debates about how governments should be allowed to collect data from citizens in order to protect them.
Security vs Privacy
Unfortunately, even our best attempts at privacy are often undermined by misplaced incentive structures that cause us to build insecure technology. The agile framework, overwhelmingly used to develop software today, rewards quick iteration and prototyping to launch new products and services. As startups scramble to launch minimum viable products, security generally receives little attention. Architecture and infrastructure change so quickly that we bet on a “security by living on the bleeding edge” approach to securing our software products.
While this launch-first, patch-later approach may speed up our software development cycles, it doesn’t give us the bandwidth to look at our code from different angles and defend against the ways it might be maliciously abused. Breaches of software databases, internet accounts, and even our physical devices are all carried out with the intention of violating privacy for the attacker’s gain.
Thus, security at all levels must be considered carefully and designed into our development processes. While I don’t think that developing secure software is mutually exclusive with iterating quickly, there’s no doubt that the currently accepted agile framework is not the most conducive to protecting users.
While companies like Apple and Google may not have an incentive to protect the privacy of their users, they do have a very strong incentive to protect the security of their platforms and devices. At a baseline, we can start there.