Data privacy is a major concern, given how much of our lives are spent online and how much information consumers share with companies. More recently, increased AI adoption has created new challenges in governing that data by opening new pathways for storing and using personal information. Because of this, it’s imperative that companies have a way to govern user data and to comply easily with state, federal, and international data privacy regulations.
Around this same time, what is now GDPR was starting to take shape, and companies were waking up to the effect data privacy rights would have on their infrastructures and policies. That said, based on what we learned from our own exercise in attempted data requests, it was pretty obvious that without a technology solution to help companies functionally respond to regulation, data privacy rights would be a lot more bluster than benefit for consumers.
The status quo is to approach data privacy with a manual ‘humans can solve it’ mindset—relying on surveys to understand where data is held, human-dependent workflows to gather personal data, and human actions to ensure data preferences are fully honored. This is bad for business and bad for budgets, but more importantly, it’s bad for the privacy of internet users everywhere.
For example, when looking at the “opt-out” data preference menus—courtesy of CCPA regulations—you’ll often find what we call “dark patterns”: design choices that effectively steer consumers toward decisions counter to their true preferences. Think trick questions, misleading information, hidden details, strategically placed color blocks, and fonts designed to manipulate consumer behavior.