Content Personalization vs. Privacy and Trust

Posted by Peter Rudin on 8. September 2017 in Essay

Picture Credit: Wikimedia Commons

Introduction

Online content is becoming increasingly personalized. As firms have expanded their data collection efforts and as new tools enable aggregators to link data across sites, everything from the advertisements a user sees to the top search results on Google has been enhanced to maximize personal relevance. Algorithms often examine items such as a user’s prior browsing, geolocation, consumer file marketing data, and social network connections to determine what content should be displayed to whom. As a result, a substantial portion of what appears on websites and apps is now routinely tailored to each individual viewer.
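
As a rough illustration of how such signals might be combined, here is a minimal sketch in Python; the profile fields, weights, and scoring rule are hypothetical, not any platform's actual model:

```python
from dataclasses import dataclass

# Hypothetical user profile assembled from the signals named above.
@dataclass
class UserProfile:
    browsing_topics: set   # inferred from prior browsing
    region: str            # inferred from geolocation
    friend_topics: set     # inferred from social network connections

@dataclass
class ContentItem:
    title: str
    topics: set
    target_region: str

def relevance_score(user: UserProfile, item: ContentItem) -> float:
    """Toy linear scoring: each matched signal raises the item's rank."""
    score = 2.0 * len(user.browsing_topics & item.topics)  # own history weighs most
    score += 1.0 * len(user.friend_topics & item.topics)   # social signal
    if item.target_region == user.region:                  # geographic match
        score += 0.5
    return score

def personalize(user: UserProfile, feed: list) -> list:
    """Rank a feed so the items predicted most relevant come first."""
    return sorted(feed, key=lambda item: relevance_score(user, item), reverse=True)

user = UserProfile(browsing_topics={"travel", "tech"}, region="CH",
                   friend_topics={"sports", "tech"})
feed = [ContentItem("Hiking in the Alps", {"travel"}, "CH"),
        ContentItem("New phone review", {"tech"}, "US"),
        ContentItem("Local football results", {"sports"}, "CH")]
for item in personalize(user, feed):
    print(item.title)  # most 'relevant' items first
```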

There are both costs and benefits to personalization. Critics point to the problems caused by a homogeneous information environment, the loss of personal privacy brought about by the underlying data collection practices, and the ways in which firms organize content to serve their own financial interests at the expense of users. Echo chambers and filter bubbles threaten deliberative democracy and obstruct critical thought. On the upside, given the size and speed at which digital content is produced, personalization provides necessary filtering for an otherwise unapproachable web. Personalization thus limits the content that users can view, sometimes in questionable ways, while providing a necessary service for those hoping to find information in online media. The unsettling question is whether the latest wave of innovation in online content personalization technologies has shifted decision-making power from humans to computers. In his book Understanding Media (1964), one of the cornerstones of media theory, the Canadian professor Marshall McLuhan warned:

“Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don’t really have any rights left. Leasing our eyes and ears and nerves to commercial interests is like handing over the common speech to a private corporation, or like giving the earth’s atmosphere to a company as a monopoly.”

The Application of Personalization

The key equation and business model applied by internet sites and social media platforms like Amazon, Google, YouTube, and Facebook is to provide services for free in exchange for the user's personal data, which the service provider in turn monetizes to generate advertising income. AI technology and neuroscience are crossing the border into our private sphere, where our decision making can be manipulated at an unprecedented level. With the equation 'free services against data', US-based tech giants are amassing immense amounts of data on individual users, compiling individualized virtual footprints. Combining these virtual footprints (their clicks, downloads, purchases, 'likes', and posts) with psychology and neuroscience enables search engines and social media platforms to model human behavior and predict current and future interests. These interests can be applied to product and service marketing as well as to opinion making, as demonstrated during the last US presidential election. Cambridge Analytica, one of the leading companies in behavioral influencing, states on its website:

“With up to 5,000 data points on over 230 million American voters, we build your custom target audience, and then use this crucial information to engage, persuade, and motivate them to act. Cambridge Analytica will equip you with the data and insights necessary to drive your voters to the polls and win your campaign.”

The power of personalized prediction in marketing was demonstrated in a much-publicized 2012 media story in which Target, one of the largest US retailers, identified a teenager's pregnancy before her father did, based simply on her purchase history. Consumer data, when combined with the power of behavioral science, can reveal deeply personal things about individuals, even life-changing events like pregnancy.

Personalized advertising works, and the market for it is steadily on the rise. Self-confirming personalization can generate confirmation biases: a world where our beliefs and perceptions are constantly confirmed by our personalized media, locking us into an endless confirmation loop and an echo chamber with no critical feedback. Platforms like Facebook, Twitter, and Google base content visibility, personalization, and data sharing on private algorithms and policies. These algorithms and policies are often opaque or inaccessible to the public, yet can wield immense influence. Through personalization we become parts of algorithmic systems that we don't control. Indeed, there lies a great paradox at the very core of personalization: contrary to what the concept suggests, we can't actually personally control personalization. We can't control our algorithmic selves.
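
The confirmation loop can be made concrete with a toy simulation (all numbers hypothetical): if the ranker reinforces whatever the user clicked last, one topic eventually crowds out all others, even for a user who started with no preference.

```python
import random

topics = ["politics-left", "politics-right", "science", "sports", "culture"]
weights = {t: 1.0 for t in topics}  # the user starts out indifferent

random.seed(7)
for _ in range(50):
    # The platform samples what to show in proportion to current weights ...
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # ... and each click on the shown item reinforces the next recommendation.
    weights[shown] *= 1.3

total = sum(weights.values())
for t in sorted(topics, key=weights.get, reverse=True):
    print(f"{t:15} {weights[t] / total:6.1%}")
# One topic typically ends up dominating the feed: an echo chamber
# emerging from the feedback loop itself, with no critical feedback.
```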

‘Smart home’ applications based on IoT devices and voice-controlled smart speakers are opening new paths to profile users and to offer personalized services, including emotion sensing. Combined with advances in voice and language processing, our dependency on ‘virtual partners’ like Alexa will steadily grow. Consequently, profiling individuals based on their internet footprints is likely to continue. We will continue to use free services, periodically signing lengthy online ‘consent agreements’ that most of us don’t read anyway. US tech giants, equipped with huge financial resources, will further expand their free service offerings to lock in customers and to maintain market control through innovation, as some of the most innovative AI start-ups are continuously acquired by Google, Apple, Facebook and Co.

Privacy and Data Protection

In contrast to an authoritarian government, a democracy offers the opportunity to institutionalize and legally enshrine services that safeguard our constitutional rights. Several governmental efforts are under way to implement new data protection laws or improve existing ones.

As an example, the European Commission's data protection reform entered into force in May 2016, and its centerpiece, the GDPR (General Data Protection Regulation), applies directly in all EU Member States from May 2018 without requiring national transposition. The objective of this new set of rules is to give citizens back control over their personal data and to simplify the regulatory environment for business. The reform is a key enabler of the Digital Single Market which the EU Commission has prioritized.

Under the new regulation, which becomes effective in May 2018, any company or individual that processes personal data is also responsible for its protection, including third parties such as cloud providers.

With the GDPR in place, personal data can only be gathered legally under strict conditions and for a legitimate purpose. Furthermore, persons or organizations that collect and manage our personal information must protect it from misuse and must respect certain rights of the data subjects that are guaranteed by EU law. The consequences of the GDPR are far-reaching, for example:

  • It regulates the transfer of data across country borders.
  • It requires organizations to include data protection considerations in the core of their business when developing new solutions and services.
  • It imposes fines in case of non-compliance – as high as 4% of global annual turnover.
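
For scale, a minimal sketch of the upper fine tier as defined in Article 83(5) of the regulation (the higher of EUR 20 million or 4% of worldwide annual turnover):

```python
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Upper fine tier (GDPR Art. 83(5)): the higher of EUR 20 million
    or 4% of total worldwide annual turnover of the preceding year."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

print(f"EUR {max_gdpr_fine(2_000_000_000):,.0f}")  # EUR 80,000,000 for a EUR 2bn firm
print(f"EUR {max_gdpr_fine(100_000_000):,.0f}")    # the EUR 20,000,000 floor applies
```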

Regarding the transfer of data to the US, the European Commission adopted its decision on the EU-U.S. Privacy Shield on July 12, 2016. This framework protects the fundamental rights of anyone in the EU whose personal data is transferred to the United States, while bringing legal clarity for businesses relying on transatlantic data transfers.

Since the Commission signed the Privacy Shield agreement with officials from former President Barack Obama’s administration, almost 3,000 companies have signed up to “self-certify” that they comply with the program’s safeguards, allowing them to transfer consumers’ personal data from the EU to the US. The arrangement is a tailor-made fix to keep data flowing to the US, even though American data protection rules don’t meet the EU’s upcoming privacy standards. The first annual review between US officials and the EU regarding the Privacy Shield is due in mid-September 2017. Given Trump’s track record of erratic decision making, there is considerable concern on both sides of the Atlantic that the Privacy Shield agreement might be scrapped by the Trump administration in the months to come.

The coming years will show how effective the GDPR will be in protecting private data. Transaction-based service companies such as online shops will have to operate under stricter privacy rules. The implementation of the GDPR will also help to promote trusted services for which users are willing to pay a monthly fee covering data protection and data acquisition. Examples of such data/content-centric services are:

  • Presenting alternative choices that reflect diverse political and social views. We are cognitively and socially biased beings, vulnerable to algorithmic manipulation. This service makes sure that we are not stuck in repeating behavioral loops, filter bubbles, or virtual echo chambers.
  • Providing control over our personal data to improve our online security, helping us decide who can access our digital trails and keeping our personal data safe and in our own hands. We decide what is remembered, what is forgotten, and what is passed on to our trusted partners and heirs (a minimal sketch of such a consent mechanism follows this list).
  • Managing complexity to ensure that different environments and devices stay under our control. Our future home works the way we want and doesn’t turn into another hyper-contextual marketing platform.
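
As a minimal sketch of the consent mechanism from the second bullet above (all names hypothetical; a real service would add authentication, encryption, and audit logging):

```python
from datetime import datetime, timezone

class ConsentRegistry:
    """A registry the user edits, consulted before any party may
    read a category of personal data."""
    def __init__(self):
        self._grants = {}  # (party, data_category) -> expiry timestamp

    def grant(self, party: str, category: str, until: datetime) -> None:
        self._grants[(party, category)] = until

    def revoke(self, party: str, category: str) -> None:
        self._grants.pop((party, category), None)

    def allowed(self, party: str, category: str) -> bool:
        expiry = self._grants.get((party, category))
        return expiry is not None and datetime.now(timezone.utc) < expiry

registry = ConsentRegistry()
registry.grant("shop.example", "purchase-history",
               until=datetime(2030, 1, 1, tzinfo=timezone.utc))
print(registry.allowed("shop.example", "purchase-history"))  # True: consent given
registry.revoke("shop.example", "purchase-history")
print(registry.allowed("shop.example", "purchase-history"))  # False: access withdrawn
```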

The Issue of Trust

Trust is an emotional issue, deeply intertwined with our decision making. Intuition and reflection, balancing emotion with reasoning, are part of human history and vital to the continuing evolution of mankind. Consequently, it is vital that we understand ourselves and our behavior. Continuing advancements in AI and neuroscience will lead to intelligent machines that surpass human intelligence, yet humans will have to control them. Solid mental health and emotional stability are prerequisites for handling this increasingly complex task. Psychologists and philosophers have made significant contributions to exploring the realm of human thought and behavior. The following diagram provides an abstract view of the issues involved in trust: it depicts how emotions, thoughts, and behaviors all influence each other, while our core beliefs are shaped by our views of ourselves, of others, and of the future:

Picture Credit: Wikipedia

We all know that trust takes years to build, seconds to break, and forever to repair. Trust requires transparency, something the big tech companies only partially provide. We don’t really trust Google, Facebook, Amazon and Co., but we like the convenience their products and services provide. For how long can we maintain this balancing act as machines steadily become more intelligent?
