Authors: Mitzi László

Personal Data Trading (PDT) is a model that gives individuals the ability to own their digital identity and create granular data sharing agreements via the Internet. Rather than the current model, which tolerates companies selling personal data for profit, individuals would consciously sell their personal data to known parties of their choice and keep the profit. The individual downloads an app and requests their data from the different organisations that currently hold it. Once the data has been collected, the individual can agree to transfer data to a known party for a known amount. Transactions happen in aggregate form to ensure privacy, for example, 20 per cent of Amsterdam eats muesli for breakfast. The algorithm would be transparently available, and the data would be encrypted. By ensuring individuals receive a fair share of the profit generated from their data, PDT results in a more equitable global resource distribution. By allowing individuals to support causes they care about by investing their data, PDT allows for a more balanced say in the allocation of global resources. The model would provide individuals with a wealth distribution system, together with the transparency and efficiency to counter global catastrophic risks.


1. Abstract

While tech services are provided for free, the new currency that has been uncovered in the process is personal data. Human digital identity has become a consumable product.

Personal Data Trading (PDT) is a model that gives individuals the ability to own their digital identity and create granular data sharing agreements via the Internet. At the core is an effort to re-decentralise the Internet. Rather than the current model which tolerates companies selling personal data for profit, in PDT, individuals would consciously sell their personal data to known parties of their choice and keep the profit.

The ultimate goals are:

• More equitable global resource distribution
• More balanced say in allocation of global resources

How would PDT work from the perspective of the individual? The individual would download an app, log in, and initiate personal data collections. Data transaction offers would be presented to the individual via this app, with the option to accept, reject or counter offer.

Data transaction offers would include: the buying party, the purchase purpose, the price, the algorithm used to generate the sold data, and the time and location frame, for example, walking patterns in Amsterdam over the month of January 2017, sold to the city council to plan lamppost distribution. The data sale frequency is also stated, for example, a monthly subscription or a one-off sale. Counter offers would be returned to the data buyers until a deal was closed. If no deal is reached, no transaction occurs.

GOVERNING PRINCIPLES

1.1. OWNERSHIP – The governing principle of PDT is that individuals own their own personal data.

Personal data refers to data sets describing a person, ranging from physical attributes to their preferences and behaviour. The collective of one individual's personal data forms a digital identity (or perhaps digital alter ego is more fitting). A digital identity encompasses all of our personal data shadowing, representing and connected to our physical and ideological self. Ownership involves determining rights and duties over property. While the Internet is not owned by anyone, corporations have come to own personal data, creating value through data collection, search engines and communication tools. By default, as a side effect of owning the intellectual property making up the Internet tools, these corporations have been collecting our digital identities as raw material for services delivered to other companies at a profit.

If one person records their observations about another person, who owns those observations? The observer or the observed? What responsibilities do the observer and the observed have in relation to each other? Since the Internet brought massive scale and systematisation to the observation of people and their thoughts, these questions are increasingly important to address. Slavery, the ownership of a person, is outlawed in all recognised countries. The question of personal data ownership falls into unknown territory in between corporate ownership, intellectual property, and slavery. Who owns a digital identity?

1.2. EXCLUSIVITY

1.3. CONSENT

1.4. PRIVACY

1.5. PORTABILITY

1.6. CURRENCY – Under the PDT model, rather than companies selling your data, you as an owner can sell your personal data and keep the profit.

Are free tech services in exchange for your personal data a worthwhile exchange for the consumer? What is the exchange rate of data to money? PDT by individuals in the proposed framework would distribute profits amongst the population and could also have radical consequences for societal power structures. It is now widely acknowledged that the current centralised data design exacerbates ideological echo chambers and has far-reaching implications for seemingly unrelated decision-making processes such as elections. The data exchange rate is not only monetary, it is ideological. Do institutional processes have to be compromised by the centralised use of communication tools guided by freely harvested personal data?

PDT adds a fourth mechanism for wealth distribution, the other three being salaries via labour, property ownership, and company ownership. Regardless of education or socioeconomic class, everyone is born with valuable identity data. PDT can provide a mechanism for a universal basic income.

RESPONSIBILITIES

The principles of PDT need to be implemented by fulfilling certain responsibilities. Existing entities suited to these responsibilities have been identified. To avoid conflicts of interest, no single entity could take on more than one responsibility. Multiple entities can carry out the same responsibility. In this way, individuals can select their preferred entity, resulting in innovation-enhancing competition as well as maintenance of the decentralised ambition of PDT.

2.1. COLLECTION

2.2. STORAGE

2.3. ENCRYPTION

2.4. AUDITING AND ACCREDITATION

2.5. TRANSACTION

2.6. FINANCING

PAVING THE WAY

European data protection legislation makes Europe a logical sandbox. PDT implements Article 12 of the UN Declaration of Human Rights.

Step 1. OWNERSHIP

A sizeable group of individuals needs to beat a path through the process of recovering their personal data and storing it in encrypted form.

Achievements: audited homomorphic encryption and seed funding

Step 2. SCALE

The personal data collection app needs to be promoted towards a critical mass.

Achievements: strong network of influential people working with or interested in PDT through years of public speaking on the issue

Step 3. TRADING

People owning data need to sell their data in anonymised aggregate form.

APPOINTING KEY INDIVIDUALS AND DECISION-MAKERS

Re-decentralisation of the Internet through granular data sharing agreements complements the current model by providing a mechanism for the individual to have a voice within the crowd.

Key individuals taking on responsibilities are self-appointed through ability. Multiple key individuals taking on the same responsibility will compete for individuals. The competition would shift towards providing innovative solutions rather than a competitive data grab.

The ultimate decision-makers in the PDT model are the data suppliers and the data buyers. Data buyers would become more like political parties because it would become valuable to attract individuals to contribute data to their cause.

Returning the value already generated from data back to individuals gives them a reason for civic acceptance regardless of the political system they currently live within.

Whoever owns data owns the future.

2. Description of the model

The ultimate goals of the Personal Data Trading (PDT) model are:

• More equitable global resource distribution
• More balanced say in allocation of global resources

Personal Data Trading (PDT) is a framework that gives human beings the ability to own their digital identity and create granular data sharing agreements via the Internet.

At the core is an effort to re-decentralise the Internet. Rather than the current model which tolerates companies selling personal data for profit, in PDT, individual human beings would directly own and consciously sell their personal data to known parties of their choice and keep the profit.

1.1. THE PRINCIPLE OF OWNERSHIP – The governing principle of PDT is that individuals own their own personal data.

Personal data refers to data sets describing a person, ranging from physical attributes to their preferences and behaviour. Examples of personal data include:

Genome data, GPS location, written communication, spoken communication, lists of contacts, internet browsing habits, financial transactions, supermarket spending, tax payments, criminal record, laptop and mobile phone camera recordings, device microphone recordings, driving habits via car trackers, mobile and health records, fitness activity, nutrition, substance use, heartbeat, sleep patterns and other vital signs.

The collective of one individual's personal data forms a digital identity (or perhaps digital alter ego is more fitting). A digital identity encompasses all of our personal data shadowing, representing and connected to our physical and ideological self. Ownership involves determining rights and duties over property. While the Internet is not owned by anyone, corporations have come to own personal data, creating value through data collection, search engines and communication tools. By default, as a side effect of owning the intellectual property making up the Internet tools, these corporations have been collecting our digital identities as raw material for services delivered to other companies at a profit.

If one person records their observations about another person, who owns those observations? The observer or the observed? What responsibilities do the observer and the observed have in relation to each other? Since the Internet brought massive scale and systematisation to the observation of people and their thoughts, these questions are increasingly important to address. Slavery, the ownership of a person, is outlawed in all recognised countries. The question of personal data ownership falls into unknown territory in between corporate ownership, intellectual property, and slavery. Who owns a digital identity?

Parents or guardians of minors have responsibility for their children's data.

1.2. THE PRINCIPLE OF EXCLUSIVITY – A key component of ownership is unique and controlled access.

Ownership implies exclusivity, particularly with abstract concepts like ideas or data points. It is not enough to simply have a copy of your own data; others should be restricted in their access to what is yours. Knowing what data others keep is a near impossible task. A simpler approach is to cloak yourself in nonsense. To ensure that corporations or institutions cannot make use of any copy of your data they hold, it is possible to send noise that confuses that data. For example, a robot could randomly search for terms that you would not usually search for, making the data obtained by the search engine useless through confusion (see TrackMeNot by New York University https://cs.nyu.edu/trackmenot/ [1]).
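
A minimal sketch of this decoy-query idea follows, assuming a hypothetical word list and a placeholder search URL (neither is part of TrackMeNot or of the PDT model):

```python
# Sketch of TrackMeNot-style query obfuscation (illustrative only).
# The word list and the search URL are placeholders, not a real service.
import random
import time
import urllib.parse
import urllib.request

DECOY_TERMS = ["garden furniture", "roman history", "bicycle repair",
               "banana bread recipe", "weather in oslo", "chess openings"]

SEARCH_URL = "https://example-search-engine.test/search?q={query}"  # placeholder

def send_decoy_query():
    """Issue one random, meaningless query to dilute a real search profile."""
    term = random.choice(DECOY_TERMS)
    url = SEARCH_URL.format(query=urllib.parse.quote(term))
    try:
        urllib.request.urlopen(url, timeout=5)  # the response is deliberately ignored
    except OSError:
        pass  # decoys are best-effort; failures do not matter

if __name__ == "__main__":
    for _ in range(3):
        send_decoy_query()
        time.sleep(random.uniform(10, 60))  # irregular timing looks less robotic
```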

1.3. THE PRINCIPLE OF CONSENT – Ownership requires informed and explicitly expressed consent over what data moves to whom, when, and for what purpose.

Assuming individuals own their own personal data, use of that data by another person, company or institution, requires the explicit permission of the owner.

A data transaction cannot be used as a bargaining chip for an unrelated or superfluous consent issue, for example, improving marketing recommendations while you are trying to ring your mother. While there are services where you need to share data, these transactions should not be exaggerated and should be held within context. For example, an individual needs to share data to receive adequate medical recommendations; however, that medical data does not automatically need to go to a health insurance provider. These are separate data transactions which should be dealt with as such. Under PDT, implied consent to transfer data ownership simply because you use a chat application is not considered valid.

The full scope and extent of the transaction needs to be explicitly detailed to the individual, who must be given ample opportunity to evaluate whether they would like to engage. Timing is critical, i.e. these issues should be dealt with in a calm moment with time to reflect, not at the moment you want to buy a train ticket or are experiencing a medical emergency.

The permission needs to be given in a format which is explicit, not implied. Just because you choose an application to chat with your partner does not mean that this app needs access to your entire list of contacts. The button which you click to give permission should not be designed in such a way that the automatic behaviour is opting in, for example, when in a binary choice one button is smaller than the other, one button is hidden in the design while the other jumps out at you, or one button requires multiple clicks whereas the other is a single click.

While a person could give continuous consent on a general topic, it should always be possible to retract that permission for future transactions. As with consent for sexual activity, consent for transactions that have already taken place cannot be retracted. For example, it would be possible for an individual to give consent to use their personal data for any cause advancing the treatment of cardiovascular disease until further notice. Until the individual changes their mind, these transactions can continue to occur seamlessly without their involvement.
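
One way such standing, revocable consent could be represented is sketched below; the field names and structure are illustrative assumptions rather than a defined PDT format.

```python
# Illustrative sketch of a revocable, standing consent record.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ConsentRecord:
    subject_id: str                 # the data owner
    purpose: str                    # e.g. "research advancing cardiovascular treatment"
    data_categories: List[str]      # e.g. ["heartbeat", "nutrition"]
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    revoked_at: Optional[datetime] = None

    def revoke(self) -> None:
        """Stop all future transactions; past transactions are unaffected."""
        self.revoked_at = datetime.now(timezone.utc)

    def permits(self, when: datetime) -> bool:
        """A transaction at `when` is allowed only while consent is active."""
        return self.granted_at <= when and (self.revoked_at is None or when < self.revoked_at)
```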

1.4. THE PRINCIPLE OF PRIVACY – Ownership requires data transactions to occur in such a manner that privacy is preserved.

“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.” – United Nations Declaration of Human Rights Article 12.

Why does privacy matter? Data is useful to make systems more efficient, however, defining the end goal of this efficiency is essential in assessing how ethical data usage is.

The use of data monitoring by governments to observe citizens needs explicit authorisation through appropriate judicial process. Possibly it would even be more efficient to observe the relatively small number of criminals manually rather than track the relatively large population. Blanket observation of inhabitants by national governments and corporations is a slippery slope to an Orwellian style of governance. Privacy is not about keeping secrets; it is about choice, human rights, freedom, and liberty. For example, sharing your medical data with your doctor under the understanding that it will be used to improve your health is ethically sound, even when the doctor reveals that data to another doctor. However, when that same data is shared with a marketing agency, as recently happened between the British National Health Service and Google's DeepMind artificial intelligence company, the ethical implications are more uncertain (Google DeepMind and healthcare in an age of algorithms by Julia Powles and Hal Hodson https://link.springer.com/article/10.1007/s12553-017-0179-1 [2]). Privacy is about choosing the context: what data you share, with whom, for which purpose, and when. Privacy is currently not being implemented, possibly because the personal power and wealth gained from not doing so act as a disincentive for both private companies and governments. Also, using data to measure actual social impact could reveal inefficiency that would be inconvenient for the politicians involved or for companies' claims.

The public debate on privacy is often unfairly reduced to an over-simplistic binary choice between privacy and scientific progress. Marketing campaigns have even dismissed critics of centralised data collection as resisting progress and holding on to the past. However, the benefits of scientific progress through data can be achieved in a manner consistent with privacy values, as has historically been the case in epidemiological research. The extraction of value from data without compromising identity privacy is certainly possible technologically, e.g. by utilising homomorphic encryption and algorithmic design which makes reverse engineering difficult.

Homomorphic encryption allows the chaining together of different services without exposing the data to each of the services. Even the software engineers working on the software would not be able to override the user. Homomorphic encryption schemes are malleable by design, meaning they can be used in a cloud computing environment while ensuring the confidentiality of processed data. The technique allows analytical computations to be carried out on ciphertext, generating encrypted results which, when decrypted, match the results of the same operations performed on plaintext.
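
As an illustration of that property, the toy sketch below uses the Paillier cryptosystem, a well-known additively homomorphic scheme: multiplying two ciphertexts and then decrypting yields the sum of the two plaintexts. The tiny key size is for readability only and is wholly insecure, and this is not the audited encryption described later in this document.

```python
# Toy Paillier cryptosystem (requires Python 3.9+), illustrating computation on
# ciphertext: multiplying encryptions adds the underlying plaintexts.
import math
import random

def keygen(p: int = 1789, q: int = 1879):
    """Generate a toy key pair from two small primes (insecure, for illustration)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)            # valid because the generator g is n + 1
    return (n,), (n, lam, mu)       # public key, private key

def encrypt(pub, m: int) -> int:
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)      # random blinding factor coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c: int) -> int:
    n, lam, mu = priv
    n2 = n * n
    u = pow(c, lam, n2)
    return (((u - 1) // n) * mu) % n

pub, priv = keygen()
a, b = 20, 80                       # e.g. muesli eaters and other respondents
ca, cb = encrypt(pub, a), encrypt(pub, b)
# Multiplying the ciphertexts and decrypting recovers the plaintext sum:
assert decrypt(priv, (ca * cb) % (pub[0] ** 2)) == a + b
```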

The results of analytics can be presented in such a way as to be fit for purpose without compromising identity privacy. For example, a data sale stating that “20% of Amsterdam eats muesli for breakfast” would transmit the analytical value of the data without compromising privacy, whereas stating that “Ana eats muesli for breakfast” would not maintain privacy. Algorithmic design and the size of the sample group are critical to minimise the capacity to reverse engineer statistics and track targeted individuals. One technical solution to the reverse engineering of aggregate metrics is to introduce fake data points about made-up people which do not alter the end result, for example the percentage of a group that eats muesli.
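
A minimal sketch of this padding idea, using invented survey data: fabricated respondents are added in the same proportion as the real ones, so the published percentage is unchanged while individual rows become harder to attribute. The padding strategy is an illustrative assumption, not the PDT specification.

```python
# Pad a real sample with fake respondents without changing the aggregate result.
import random

def padded_aggregate(real_flags, pad_factor=2):
    """real_flags: list of booleans (True = eats muesli). Returns (percentage, padded rows)."""
    eaters = sum(real_flags)
    total = len(real_flags)
    # Fabricate pad_factor extra records per real record, preserving the exact ratio.
    fake = [True] * (eaters * pad_factor) + [False] * ((total - eaters) * pad_factor)
    padded = list(real_flags) + fake
    random.shuffle(padded)                      # mix real and fake rows
    return 100 * sum(padded) / len(padded), padded

amsterdam = [True] * 20 + [False] * 80          # 20% of the real sample eats muesli
pct, rows = padded_aggregate(amsterdam)
print(f"{pct:.0f}% of Amsterdam eats muesli for breakfast")   # still 20%
```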

1.5. THE PRINCIPLE OF PORTABILITY – Ownership puts emphasis on the ability to conveniently move data from one service to another.

When personal data is owned by the individual, they have the option to simply remove it and take it to another site if they become dissatisfied with the service. PDT achieves a high degree of convenient portability by allowing humans to switch to alternatives without losing the historical data collections describing their product preferences and personal conversations.

For example, you may choose to switch from WhatsApp to an alternative messaging app, and under personal data trading it would be possible to transfer your previous conversations from WhatsApp to a new messaging service such as Signal. Giving humans the option to switch services without the inconvenience of losing historical data means that services need to keep customers happy by providing good service rather than locking them in by means of incompatibility with alternatives.

For portability, data expression must be standardised in such a way that this can happen seamlessly. For example, describing the unit as “kilograms” rather than “kg” means that robots treat them as different, although they are the same. These small variations can result in messy data that cannot easily be combined or transferred into a new system which does not recognise them. Currently, Apple states that it provides privacy services; however, it is difficult to extract data from Apple systems, making it difficult to migrate to an alternative. In the PDT framework, data expression would be standardised for easy portability with the click of a button.
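
A minimal sketch of such standardisation, with an illustrative mapping table (the table is an assumption for the example, not a defined PDT standard):

```python
# Map common spelling variants onto one canonical unit before export or merging.
CANONICAL_UNITS = {
    "kg": "kg", "kgs": "kg", "kilogram": "kg", "kilograms": "kg",
    "m": "m", "meter": "m", "metre": "m", "meters": "m", "metres": "m",
}

def normalise_unit(raw: str) -> str:
    """Return the canonical unit name, or raise if the variant is unknown."""
    key = raw.strip().lower()
    if key not in CANONICAL_UNITS:
        raise ValueError(f"unknown unit: {raw!r}")
    return CANONICAL_UNITS[key]

assert normalise_unit("Kilograms") == normalise_unit("kg") == "kg"
```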

Standardisation would also facilitate setting up the data cleaning mechanisms necessary to install checks and balances validating the quality of the data. By joining multiple sources it would be possible to identify erroneous or falsely entered data.

1.6. THE PRINCIPLE OF CURRENCY – Under the PDT model, rather than companies selling your data, you as an owner can sell your personal data and keep the profit.

The business models driving the tech giants have uncovered the possibility of making human identity the product to be consumed. While tech services including search engines, communication channels and maps are provided for free, the new currency that has been uncovered in the process is personal data. This raises the economic question of whether free tech services in exchange for your personal data are a worthwhile implicit exchange for the consumer. What is the exchange rate of personal data to money? How much are tech services such as a search engine, a communications channel and a digital map actually worth, for example in dollars?

The difference in value between the services facilitated by tech companies and the equity value of these tech companies is the difference between the exchange rate offered to the citizen and the 'market rate' of the value of their data. According to Statista, in 2016 Google had a revenue of 89.5 billion dollars and 1 billion Gmail users, meaning that each person generates roughly 90 dollars per year in ‘data added value’. Would Gmail users be willing to pay 89.5 dollars per year for the service? According to Statista, in 2016 Facebook had a revenue of 27.638 billion dollars and 800 million active users. Would Facebook users be willing to pay 34.5 dollars per year to use Facebook? Scientifically there are many holes to be picked in this rudimentary calculation: the financial figures of tax-evading companies are unreliable, would revenue or profit be more appropriate, how do you define an active user, you need a large number of individuals for the data to be valuable, would there be a tiered price for different people in different countries, not all Google revenue is from Gmail, etc. Although these calculations are undeniably crude, the exercise serves to make the monetary value of data more tangible. The examples given only cover two cases, but if we extend profits from data sales to other areas such as healthcare, the monthly profit per individual would increase.
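
The back-of-the-envelope arithmetic above, written out explicitly and taking the quoted Statista figures at face value:

```python
# Crude per-user "data added value" estimate from the revenue and user figures above.
google_revenue_usd = 89.5e9      # Google revenue, 2016 (Statista, as quoted in the text)
gmail_users = 1.0e9              # Gmail users, 2016
facebook_revenue_usd = 27.638e9  # Facebook revenue, 2016
facebook_users = 800e6           # Facebook active users, 2016

print(f"Google:   ~${google_revenue_usd / gmail_users:.1f} per user per year")       # ~$89.5
print(f"Facebook: ~${facebook_revenue_usd / facebook_users:.1f} per user per year")  # ~$34.5
```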

Personal data trading by individuals in the proposed framework would distribute profits amongst the population and could also have radical consequences for societal power structures. It is now widely acknowledged that the current centralised data design exacerbates ideological echo chambers and has far-reaching implications for seemingly unrelated decision-making processes such as elections. The data exchange rate is not only monetary, it is ideological. Do institutional processes have to be compromised by the centralised use of communication tools guided by freely harvested personal data?

Data is valuable because it allows you to act more efficiently than when you are guessing or operating by trial and error. Two elements of data have value: trends and real-time signals. The build-up of historical data allows us to make future predictions based on trends. Real-time data has value because you can act on it instantaneously.

There is inadequate use of data in designing policy and private investment. Speculative investment is largely driven by group emotions such as shareholder confidence or political approval ratings. For example, desired wellbeing outcomes may be greater when investing in deprived areas, yet the public may support greater policing instead. The era of emotional spending is now obsolete given the vast information within arm's reach of public and private bodies. The currency of today needs to be tied to system efficiency through data-driven decision-making, enabling the long-term planning required for global sustainability.

Although parents or guardians of minors below the age of 18 have responsibility for their children's data, they cannot transact in their child's data in exchange for money. Rather, data transactions involving minors can only be donations, which opens up the possibility of using child data in contexts such as public healthcare and education.

While initially it is realistic to assume that data would be traded for money, it is possible to imagine a future where data would be traded for data. The ‘I'll show you mine if you show me yours’ scenario could replace money altogether. Importantly, this is a future scenario; the first step is to focus on exchanging personal data for existing monetary currency.

PDT adds a fourth mechanism for wealth distribution, the other three being salaries via jobs, property ownership, and company ownership.

RESPONSIBILITIES

How would PDT work from the perspective of the individual? The individual would download a smartphone application, log in, and initiate personal data collections. Data transaction offers would be presented to the individual via this app, with the option to accept, reject or propose a counter offer. The data transaction offer would include information such as: the buying party, the purpose for which they want to buy the data, and the algorithm that will be used to generate the sold data. The algorithm needs to include the time and location frame of the data, for example, walking patterns in Amsterdam over the month of January 2017, sold to the city council to plan lamppost distribution. The data sale frequency is also stated; for example, will the sale happen on a monthly basis or as a one-off? In the case of a counter offer there would be movement back to the data buyer via the commercial entity and the auditors until a deal was closed. If no deal is reached, no transaction will occur.
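
A sketch of the fields such an offer might carry is given below; the field names, types and example values are illustrative assumptions, not a defined PDT schema.

```python
# Illustrative structure of a data transaction offer presented to an individual.
from dataclasses import dataclass
from datetime import date

@dataclass
class DataTransactionOffer:
    buyer: str            # e.g. "City of Amsterdam"
    purpose: str          # e.g. "plan lamppost distribution"
    algorithm: str        # description of the aggregate statistic to be computed
    time_frame: tuple     # (start, end) of the data covered
    location_frame: str   # e.g. "Amsterdam"
    price_eur: float      # offered payment to the data owner
    frequency: str        # "one-off" or "monthly subscription"

offer = DataTransactionOffer(
    buyer="City of Amsterdam",
    purpose="plan lamppost distribution",
    algorithm="aggregate walking patterns, minimum sample of 10,000 residents",
    time_frame=(date(2017, 1, 1), date(2017, 1, 31)),
    location_frame="Amsterdam",
    price_eur=2.50,
    frequency="one-off",
)
# The individual can now accept, reject, or return a counter offer (e.g. a higher price).
```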

The principles of PDT need to be implemented by fulfilling certain responsibilities, which are outlined in detail in the sections below. Although every individual could carry out these responsibilities on their own, this would be highly inefficient. A degree of resource pooling could speed up the process of PDT. However, this pooling cannot come at the cost of the principles themselves. Therefore, an entity can only take on a single responsibility. Multiple entities can carry out the same responsibility. In this way, individuals can select their preferred entity, resulting in innovation-enhancing competition as well as maintenance of the decentralised ambition of PDT. Entities could charge individuals to carry out the responsibility.

2.1. COLLECTION – Individuals need to collect their personal data to ensure ownership.

The current sensors collecting data are dispersed geographically and the control of these sensors is often not in the hands of individuals. To ensure ownership individuals need to be able to collect their personal data into a single hub.

Key skills and resources needed to complete this responsibility:

• Lobbying
• Technical transfer of data, e.g. through APIs

Existing entities who could be suited for this responsibility:

• Workers unions
• Consumer protection organisations
• Legal firms

2.2. STORAGE – Personal data needs to be stored somewhere.

Data storage needs to be such that it is clear to the individual exactly where, physically, it is stored, and which jurisdiction applies. Any changes should be communicated transparently, and permission for modification should always be given by the user.

Key skills and resources needed to complete this responsibility:

• Data storage
• Database maintenance

Existing entities who could be suited for this responsibility:

• Individuals – could simply store their data at home in a personal database.
• Banks – have experience in storage of valuable items on behalf of individuals.
• Commercial data storage providers

2.3. ENCRYPTION – The value of data needs to be extracted without compromising privacy.

Encryption ensuring that the value of the data is extracted without compromising identity privacy needs to be in place and constantly maintained.

Key skills and resources needed to complete this responsibility:

• Cybersecurity know-how, particularly on homomorphic encryption

Existing entities who could be suited for this responsibility:

• Cybersecurity companies
• Open source – encryption could be crowdsourced as long as there were processes in place to protect against malicious hackers.

2.4. AUDITING & ACCREDITATION – The algorithms needs to be audited for impact and the data buyers need to be accredited for integrity.

Algorithm designs need to be publicly disclosed (O'Neil, C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin Random House [3]). Additionally, auditing bodies would be responsible for making comprehensive impact assessments. Assessments have to be readily available to individuals selling personal data to ensure access to the relevant information for well-informed choices.

There need to be checks and balances on the actual use of the data by the buyers. Are they doing what they say they are doing? The assessment would be in place to provide transparency on exactly what the transacted data is being used for, and by whom, and whether the claims of the companies and institutions buying the data stand up to scrutiny. Data concentration needs to be flagged to avoid a situation where too much data flows to a single entity.

Key skills and resources needed to complete this responsibility:

• Statistics
• Ethics
• Quality assurance and quality control

Existing entities who could be suited for this responsibility:

• United Nations – particularly the specialised agencies, which could be logical existing candidates for a subject-specific global algorithm auditing and data buyer accreditation body. The financing of these agencies would need to be such that wealthy individuals cannot influence the neutrality of the process.
• Existing accreditation bodies, on the basis of international standards (CEN, ISO)
• A crowdsourced review system – similar to Airbnb, Uber or Amazon product reviews. Discussion forums such as Reddit and crowdsourced knowledge such as Wikipedia are also examples of how we could tap into public knowledge and review systems to enhance public trust.

2.5. TRANSACTION – Data transactions need to be arranged.

The transaction of data from sellers to buyers, i.e. brokering, needs to be coordinated. The coordination involves the design of the algorithm including the sample and the price.

Key skills and resources needed to complete this responsibility:

• Sales
• User interface design

Existing entities who could be suited for this responsibility:

• Technology companies – without additional services such as search engines and communication tools
• Blockchain – the technological hurdle that needs to be overcome is the unit of data. Blockchain works through a publicly shared ledger that lists all transactions and therefore makes it possible to decentralise the checks and balances. Data value does not yet have a logical unit, as it is highly subjective and still to be defined. Once the unit of data is resolved, it would be possible to use blockchain rather than a commercial entity to arrange the data transactions.

2.6. FINANCING – Individuals need to be paid for their data as do those carrying out the responsibilities making PDT possible.

While the majority of the profit generated from each individual’s data would return to that individual, a thin slice would be paid to the relevant bodies required to automate the data transactions in an ethical manner. So, the not for profit entity providing collection and encryption services, the for profit entity brokering the data transaction, and the auditing body providing checks and balances for the data transactions, would each receive a share of the profit generated.
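
As a rough illustration of this split, the sketch below divides the proceeds of a single sale among the parties named above; the percentages are assumptions chosen for the example, not PDT policy.

```python
# Divide one sale's proceeds between the individual and the supporting entities.
def split_sale(sale_price: float,
               collector_share: float = 0.03,
               broker_share: float = 0.05,
               auditor_share: float = 0.02) -> dict:
    """Return how one sale's proceeds are divided; the remainder goes to the individual."""
    shares = {
        "collection & encryption entity": sale_price * collector_share,
        "transaction broker": sale_price * broker_share,
        "auditing body": sale_price * auditor_share,
    }
    shares["individual (data owner)"] = sale_price - sum(shares.values())
    return shares

print(split_sale(2.50))   # e.g. a 2.50 euro offer leaves ~2.25 for the individual
```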

Key skills and resources needed to complete this responsibility:

• Accountancy

Existing entities who could be suited for this responsibility:

• Accountancy firms
• Blockchain
• Banks

PRINCIPLES > RESPONSIBILITY > CANDIDATES

Ownership > Collection > Workers unions/ Consumer protection organisations/ Legal firms

Exclusivity > Storage > Banks/ Data storage provider/

Privacy > Encryption > Cybersecurity companies/ Open source

Consent > Auditing & Accreditation > UN/ Review system

Portability > Transaction > Technology companies/ Blockchain

Currency > Financing > Accountancy firms/ Banks/ Blockchain

PAVING THE WAY

The epicentre for developing Personal Data Trading is the European Union because of the legal framework supporting citizen privacy. The EU General Data Protection Regulation (GDPR) replaces the Data Protection Directive 95/46/EC in May 2018 and was designed to harmonise data privacy laws across Europe, to protect and empower all EU citizens' data privacy, and to reshape the way organisations across the region approach data privacy. Key changes as a result of the GDPR can be found on this site: http://www.eugdpr.org/key-changes.html

Although GDPR alludes to the governing principle of PDT i.e. that individuals own their data, it is unclear how the ownership will be practically enforced and technically enabled. How can an individual or a regulatory body know what data is in which company and what, exactly, they are doing with it? If an individual requests their data, how can they store and handle that data without the help of a technology company?

Some individuals have tried to own and even trade in their own data. In 2012, Malte Spitz gave a TED talk on how he submitted multiple requests and lawsuits to collect 35,830 lines of data from his German phone operator. In 2013, Federico Zannier started a Kickstarter campaign to sell his personal data and managed to raise $2,733 from a little over 200 backers. But a broad movement of implementation is lacking.

How can we pave the way towards a reality where PDT is convenient and commonplace?

Step 1. Ownership

A sizeable group of first adopter citizens needs to beat a path through the process of recovering their personal data and storing it in encrypted form within the PDT framework. The process needs to be streamlined into a user-friendly, convenient tool such as an app or a site. The current and anticipated European GDPR data protection legislation makes Europe a logical sandbox.

In 2014, I started a full-time journey to provide people with a tool for personal data trading. Previously I was working for a Brazilian epidemiology study where I was tasked with walking from door to door collecting data. With so much talk of tech and big data, why are public researchers still walking around with clipboards? In a time when we “create 2.5 quintillion bytes of data” (IBM website, accessed October 2016) and spend 6% of global GDP (World Bank, accessed October 2016) on public health, there has to be an alternative to the clipboard. I founded a Foundation with the specific goal of extracting the value of data without compromising identity privacy.

Achievement: Stamp of approval from an NCC Group security audit of the encrypted data storage mechanism.

The Foundation has successfully designed and built a piece of homomorphic encryption that allows for the extraction of information without compromising privacy. This encryption was audited by NCC Group, global experts in cyber security and risk mitigation, who also work for clients such as governments and global corporations and hold multiple prestigious accreditations. NCC Group described our encryption innovation as beautiful and gave it the stamp of approval in terms of security. Even the software engineer working on the code cannot override the user. This encryption innovation is critical to making personal data trading possible.

Achievement: Funding through government grants and crowdsourcing.

Personal data ownership can lead to a radical redistribution of power, which is not necessarily comfortable for governments, corporations and investors. This is why the funding of this concept has been particularly difficult. Having said that, we have found enormous verbal support and investment in kind from influential individuals who have voiced concern over the issues we tackle and support for our approach. We received 38k euros of funding from a Dutch public ministry to carry out the audit of the software. We have gathered 100 people who are concerned about the issues and are crowdfunding the building of a tool for personal data ownership; they could be the core of the first adopters.

Step 2. Scale

The personal data collection app needs to be promoted, initially amongst Europeans, so that a critical mass of first adopter citizens own their own data and start being of interest for making data transactions.

Achievement: Strong network including influential people working with or interested in PDT.

Through four years of full-time public speaking (see TECH talk https://www.youtube.com/watch?v=eiFBljkGWAc) and building relationships, I have a rich network of people working on similar concepts or interested in this concept who are willing to contribute to making it a reality. This concept has been largely unfunded and the core team is built of people who simply believe in the underlying principle. We have worked with public health ministries, other government bodies and corporate management to explore implementing the concept.

For example, two global marketing agencies and a branding agency have developed the communications material and brand identity. This work was done pro bono. Reputable public figures are essential to gaining public trust and catalysing awareness. In combination with our roll out strategy and communications expertise we believe that engaging with champions could be a viable route.

Notably, in 2016, Sir Tim Berners-Lee, inventor of the World Wide Web, publicly voiced similar concerns about personal data flows and founded Solid, a project for personal data management.

“Today marks 28 years since I submitted my original proposal for the world wide web. I imagined the web as an open platform that would allow everyone, everywhere to share information, access opportunities and collaborate across geographic and cultural boundaries. In many ways, the web has lived up to this vision, though it has been a recurring battle to keep it open. But over the past 12 months, I’ve become increasingly worried about three new trends, which I believe we must tackle in order for the web to fulfill its true potential as a tool which serves all of humanity.

1. We’ve lost control of our personal data

2. It’s too easy for misinformation to spread on the web

3. Political advertising online needs transparency and understanding”

– Sir Tim Berners-Lee in Spring 2017

Step 3. Trading

The Europeans who own their data will be able to approach companies and institutions and offer to sell their data in anonymised aggregate form on a monthly subscription access basis. Once this transaction process is in place for European citizens, who are in a unique position because of their wealth and legislative weight, it will be possible to offer it to non-European global citizens. This could potentially be financed by the profit generated from personal data sales.

HOW THE MODEL MANAGES BOTH CURRENT AND EMERGING CHALLENGES AND RISKS

3.1. POVERTY – PDT provides a fourth wealth distribution mechanism for basic income, returning the value inherent in every human being's data back to the individual.

3.2. POLITICALLY MOTIVATED VIOLENCE – PDT decreases the opportunity for systematic targeted propaganda via social media, a feature which exacerbates ideological echo chambers.

3.3. RAPID POPULATION GROWTH – PDT provides women with their own income source and frees up resources for the education of girls, both recognised factors in determining family size.

3.4. CLIMATE CHANGE – PDT increases the likelihood that resources are channeled towards the good of all humankind, unlocking value through collaboration e.g. supporting sustainable energy farmers by providing transparent energy consumption patterns.

3.5. ARTIFICIAL INTELLIGENCE – Individuals owning their data means that they can remove or constructively contribute the raw material of artificial intelligence.

3.6. PANDEMICS – Research organisations that quantify the social impact of public policy or disaster relief initiatives are likely to receive public support through data sharing because of social empathy and the general public's desire to alleviate pain and suffering.

PDT complements the current model by providing a mechanism for the individual to have a voice within the crowd. The intention of this mechanism is a more equitable global resource distribution and a more balanced say in the allocation of global resources. Returning the value already generated from data back to individuals gives them a reason for civic acceptance regardless of the political system they currently live within. PDT does not interfere with national state sovereignty. States would have an additional tax revenue opportunity on data transactions. Tech companies would need to charge for their services, such as search engines and communication channels, and the competition would shift towards providing innovative solutions rather than a competitive data grab. PDT manages both current and emerging challenges and risks through the re-decentralisation of the Internet via granular data sharing agreements.

Whoever owns data owns the future.

3. Motivation

1.0. CORE VALUES

1.1. PDT promotes the equal value of all human beings because resources are generated from the personal identity inherent to human life rather than being determined by circumstance, e.g. socioeconomic class, education, job, inheritance.

Health and social problems are worse in more unequal countries, for example: life expectancy, literacy, infant mortality, homicides, imprisonment, teenage births, trust, obesity, mental illness and social mobility (Pickett, K. (2009) The Spirit Level: Why More Equal Societies Almost Always Do Better. Allen Lane [4]). The relationship between inequality and many social and health problems is causal, resulting in a large impact that reaches across the entire society, including the rich.

The current economic models are designed for concentration of resources as opposed to distribution (Piketty, T. (2013) Le Capital au 21 Siecle. Editions du Seuil Belknap Press [5]). When the rate of growth is low, wealth tends to accumulate more quickly from return on capital than from labour. As a result, wealth accumulates more among the top 10% and 1%, increasing inequality. The fundamental force for divergence and greater wealth inequality slowed down between 1930 and 1975 due to unique circumstances: the two world wars, the Great Depression and a debt-fuelled recession. Today we are returning towards patrimonial capitalism where much of the economy is dominated by inherited wealth, threatening to create an oligarchy. Systematic tax avoidance by high wealth individuals and corporations further accentuates wealth inequality and stifles public resources.

Apart from company and property ownership, salaries through jobs are key to wealth distribution across a population. The current path of job replacement by robots will lead to catastrophic inequality, because wealth will accumulate with the shareholders of the companies who designed the algorithms while there will be fewer jobs and therefore fewer salaries (Harari, Y. (2011) Sapiens. Harper [6]). As the BBC documentary ‘The Disruptors’ uncovered through a series of interviews and investigation, Silicon Valley has a particular set of values which describes progress as technological advancement without regard for human value or the environmental system. The business models are designed with an exaggerated focus on marketing and sales, meaning that the Internet is being hijacked by the flawed economic models of growth through consumption. The marketing-through-data-sales business model relies on the data being about consumers; however, if jobs are being replaced, then who will buy what the robots are producing?

Universal basic income could be the answer to wealth distribution when job salaries decrease. However, tax avoidance by wealthy individuals and corporations means that the public sector does not have an income stream from which to provide this basic income. Also, governments with fewer resources because of recent colonial history would not be able to pay as much to their citizens, meaning that different nations would be unequally equipped. If corporations paid basic income, we would move towards a non-democratic dependence on these corporations with few checks and balances and no mechanism for the individual to voice an opinion or avoid persecution.

Universal Basic Income is a form of social security in which all citizens receive a regular, unconditional sum of money either from a government or some other public institution, independent of any other income.

Rather than depending on a government or corporation to provide a basic income you could generate resources from PDT i.e. cashing in on your identity rather than your time. Regardless of education or socioeconomic class, everyone is born with valuable identity data. PDT can provide a mechanism for a universal basic income without the centralised control of government or corporations.

How much money you could generate from your personal data is still unknown. An educated guess could be the collective revenue from the tech giants divided by the number of users plus a slice of what insurance companies generate. Certainly, there will be variations in the value of different individuals’ personal data. For example, heavy consumers will generate more money from advertising metrics. Unhealthy people will have more value because they will have data that could help us find treatments. Ironically, today, unhealthy people are less incentivised to share their data because of fear of compromising their access to treatment. Possibly, if given the choice, people would be more incentivised to share their data to find treatments to health conditions than to be exposed to advertising.

What would be the impact of personal data trading on the business models of technology firms? Rather than companies profiting from selling personal data, they would need to profit from designing algorithms and arranging the implementation of the metrics. This would stimulate innovation through a natural tendency away from technology monopolists that rely on user dependency created by holding personal data and by making it incompatible to extract into another system. Algorithms would be the central form of shareholder capital. It is understandable that tech giants may become defensive about the fairness of personal data trading. However, the Internet is an innovation that was funded by public money. The tech giants have become the gatekeepers of the Internet by providing superb services. Considering that the core innovation was developed by public investment, surely the public should get a return (Mazzucato, M. (2013) The Entrepreneurial State: Debunking Public vs. Private Sector Myths [7]).

Decentralised data ownership would result in fiercer competition, forcing robot owners to find a business model beyond the data grab and shifting focus to quality and customer service. PDT decreases inequality because everyone has access to a basic income, including the billion poorest members of our species on this planet today.

1.2. PDT works for the good of all humankind by providing a mechanism for a more balanced say in the allocation of global resources.

Groups of individuals can congregate digitally by combining their personal data into aggregate analytical metrics. These groupings could be independent of national bodies or companies. The global community of homo digitalis could unite independently of overseeing institutions which artificially divide us into tribes and categories. These analytics guide the implementation of solutions. One example could be a cohort providing analytics to politicians allocating health innovation budgets, to show where resources are having the largest impact and under which policy. Another could be a cohort offering energy consumption analytics to renewable energy farmers so that they can generate energy to match demand.

Because everyone is equally represented in the analytics, everyone has a voice. The best quality analytics will be those to which the most people contribute. These will also be the analytics of greatest public concern. Issues that only impact a minority of the population will be highlighted in topic-specific analytics. The data consumer would become more like a political party because it would become valuable to attract people to contribute data to its cause. Governance boils down to effective resource distribution; priorities include education, healthcare, social stability and culture. These are factors determining a sense of societal wealth and well-being. By providing a tool that gives the individual a mechanism to democratically influence the global community through personal data control, we stimulate decentralised, bottom-up governance.

2.0. DECISION-MAKING CAPACITY

2.1. Decision-making without crippling delays is possible with PDT because individuals can support the underdog who fixes the problem or remove support from the most powerful who are not fixing the problem.

The actions of those implementing solutions can be tracked transparently to see how effective they are at implementation as well as how well their actions and words match. Efficiency tracking of implementation will encourage accountability and narrow the scope for false promises offering emotionally driven solutions with little or no effect other than consolidating the power of the implementer. Individuals can influence decision-making through a focus on issues rather than leader personalities, and opportunities to voice an opinion in public are more frequent than the current standard of once every few years. This implies increased flexibility and adaptability. Through ongoing engagement you dilute erroneous decisions. Self-serving institutions move towards being socially responsible and accountable because they have more regular interactions with the population. PDT would provide a mechanism to measure actual social impact with a hard number, allowing for quantification of the interconnectedness and effectiveness of policy, which could catalyse cooperation.

3.0. EFFECTIVENESS

3.1. PDT makes us capable of handling the global challenges and risks because it allows us to operate as a species via the Internet and gives preference to collaboration.

If we have the technical know-how to build sustainable solutions, why do we not implement them? Sadly, sustainable solutions often fail to reach the market, not as a consequence of knowledge deficiencies but rather as the result of the unwillingness of established interests who benefit from perpetuating the unsustainable. Corporations that are genuinely willing to implement sustainable solutions appear to lose resources on their balance sheets and are therefore less able to carry their principles through, in a negative feedback loop. The emphasis on fast shareholder return is the handicap. Sustainability-driven companies are takeover targets for less sustainable competitors because some of their intrinsic value (knowledge) is not visible on their balance sheet, which makes them appear undervalued.

The natural world is a complex system with thresholds, tipping points, and feedback loops which the current economic models are ill-suited to cope with without drifting into irreversible disaster (Raworth, K. (2017) Doughnut Economics. Penguin [8]). Concepts like never-ending growth, GDP and models such as the Kuznets curve are based on assumptions which do not hold in nature. Furthermore, the models assume people act as homo economicus, rational man, an assumption which has been questioned by modern neuroscience and behavioural economics (Ariely, D. (2008) Predictably Irrational. Harper Collins [9]).

PDT addresses these system flaws by encouraging the channelling of resources to corporations who provide genuinely sustainable solutions. For example, mobile phones are vastly popular, but overproduction comes at the painful human expense of exploitative working conditions and unsustainable mining practices. In the current model, these human and environmental costs are not incorporated into the price, whereas in the PDT model disadvantaged individuals would have a channel through which to exert influence.

3.2. PDT ensures implementation of decisions by offering a mechanism where efficiency is consistently quantified allowing individuals to support those who carry through the implementation effectively.

Healthcare is one example of the key sectors where current economic models fail. PDT means that research institutions and universities would be able to appeal directly to the public to get access to data for research purposes rather than apply for grants to collect data. There would be a much more direct line of communication between citizens and researchers. Science would be driven by the questions of the people rather than by boards of experts responsible for allocating grants or by private investors. Pharmaceutical companies would be able to carry out clinical trials, in particular phase four clinical trials, for less money and with more rigorous scientific method. By appealing directly to the public rather than spending money on recruitment, you would reduce selection biases in clinical trial design. Insurance companies would be able to ask for preventative scores and pay people to stay healthy. While it is true that they could pay individuals to access genetic data, it is unlikely that the majority of people would allow insurance companies to get access, for fear of compromising their access to affordable healthcare. It is more likely that insurance companies would ask for preventative lifestyle information, which would give people the option to reduce their premium through lifestyle choices. This opens up the opportunity for a business model for preventative health care which currently does not exist. Financial and healthcare incentives could finally align. Improved public health care would reduce child mortality rates which, when high, tend to inflate the birth rate.

4.0. RESOURCES AND FINANCING

4.1. PDT has sufficient human and material resources at its disposal because it is financed by taking a slice from the profit from data sales.

Once the proof of concept and the critical user mass has been established, there is a viable route to resources to scale up by tapping into a proven source of revenue, which will be partially diverted from the tech companies to the citizens.

4.2. The PDT resources are financed in an equitable manner because everyone pays a percentage towards maintaining the system.

5.0. TRUST AND INSIGHT

5.1. PDT has no vertical power structure because decision-making is crowd dependent.

Individuals own their personal data, meaning that power is decentralised. Responsibilities to make PDT possible are divided in such a way that the risk of a hijack is ring-fenced. PDT is an implementation mechanism for Article 12 of the United Nations Declaration of Human Rights.

5.2. PDT is transparent in that it gives human beings the ability to create granular data sharing agreements via the Internet.

Within the PDT model, each individual decides on granular data sharing agreements via the Internet. These agreements are completely transparent as to what is moving, where, when, to whom, and for what purpose. Auditing of the algorithms and accreditation of the data buyers is in place to ensure integrity.

6.0. FLEXIBILITY

6.1. PDT allows for revisions and improvements because data is in constant flow and is therefore adaptive to change.

Data is constantly moving and individuals can influence that flow. If it were to become apparent that a data buyer was corrupt or malicious, it would be possible for individuals to immediately alter the flow of data to limit the influence of that data buyer.

7.0. PROTECTION AGAINST ABUSE OF POWER

7.1. The control system in PDT is that individuals can choose not to share their data if any party were to overstep its mandate.

PDT reverses the centralised data grab and, through encryption, averts the possibility of abuse of power.

“In repressive regimes, it’s easy to see the harm that can be caused – bloggers can be arrested or killed, and political opponents can be monitored. But even in countries where we believe governments have citizens’ best interests at heart, watching everyone, all the time is simply going too far. It creates a chilling effect on free speech and stops the web from being used as a space to explore important topics, like sensitive health issues, sexuality or religion.” – Sir Tim Berners-Lee

In our current model the foundations of authoritarian surveillance states are being laid. Furthermore, states and groups of states are in a position to subvert other states through a covert, far-reaching, coordinated plan enabled by a rich get richer system. Clearly, we are in the midst of a massive power grab via our data. This data is being silently and opaquely harvested for exploitation. Whoever owns data owns the future.

8.0. ACCOUNTABILITY

8.1. PDT gives individuals a mechanism to hold the decision-makers accountable for their actions through actively deciding who gets the upper hand as a result of access to information.

Related to accountability, the ultimate decision-makers in the PDT model are the data suppliers and the data buyers. Data buyers are entrusted with making good use of analytics in a transparent manner. With more data it is possible to better quantify social impact with hard indicators. The indicators act as a built-in feedback loop allowing for a much stronger response in policy design or strategy by decision-makers. Decision-makers would be held accountable for their actions because the facts and figures would be clearer and more accessible. PDT would allow the public to rewrite the terms and conditions to avoid becoming digital slaves to a faceless few.

References

[1] TrackMeNot by New York University. https://cs.nyu.edu/trackmenot/
[2] Powles, J. and Hodson, H. (2017) Google DeepMind and healthcare in an age of algorithms. https://link.springer.com/article/10.1007/s12553-017-0179-1
[3] O'Neil, C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin Random House
[4] Pickett, K. (2009) The Spirit Level: Why More Equal Societies Almost Always Do Better. Allen Lane
[5] Piketty, T. (2013) Le Capital au XXIe siècle. Éditions du Seuil
[6] Harari, Y. (2011) Sapiens. Harper
[7] Mazzucato, M. (2013) The Entrepreneurial State: Debunking Public vs. Private Sector Myths. Anthem Press
[8] Raworth, K. (2017) Doughnut Economics. Penguin
[9] Ariely, D. (2008) Predictably Irrational. Harper Collins