Is Facebook a ‘Digital Gangster’? What about Ethics?

Posted by Peter Rudin on 1 March 2019 in Essay

Introduction

Following an inquiry launched in 2017, as concern grew about the influence of false information and its ability to spread unscrutinised on social media, a UK Parliamentary Committee published its findings on February 18, 2019, after an 18-month investigation into disinformation and fake news. The report accuses Facebook of obstructing its inquiry and “intentionally and knowingly” violating data privacy laws. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day”, warned the committee’s chairman, Damian Collins. “Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law”.

The report calls for sites such as Facebook to be brought under regulatory control, arguing “social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites”. It proposes comprehensive new regulations, including a mandatory code of ethics and an independent regulator empowered to bring legal proceedings against social media companies and force them to hand over user data.

Meanwhile, the US government and Facebook are negotiating a settlement over the company’s privacy lapses that could require the online social network to pay a multibillion-dollar fine, the Washington Post reported in early February 2019. The US Federal Trade Commission (FTC) has been investigating revelations that Facebook inappropriately shared information belonging to 87 million of its users with the now-defunct British consulting firm Cambridge Analytica. An eventual settlement may also mandate changes in how Facebook does business.

Ethical issues addressed by the report

Besides the ethical and legal issues of “intentionally and knowingly” violating data privacy laws, the report addresses additional ethical issues:

  • Deliberate content disinformation and misinformation:
    The report defines ‘disinformation’ as the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purpose of causing harm, or for political, personal or financial gain. ‘Misinformation’ refers to the inadvertent sharing of false information. This proliferation is made more dangerous by ‘micro-targeted messaging’, which focuses specific messages on individuals, often playing on and distorting their views. The distortion is made even more extreme by ‘deep fakes’: audio and video that look and sound like a real person saying something that person has never said.
  • Use of personal and inferred data:
    In the UK, the protection of user data is covered by the EU’s General Data Protection Regulation (GDPR). However, ‘inferred’ data is not protected; this includes characteristics that may be inferred about users not from specific information they have shared, but through analysis of their data profile. This, for example, allows political parties to identify supporters on sites like Facebook through data-profile matching and the ‘lookalike audience’ advertising targeting tool. According to Facebook’s own description of ‘lookalike audiences’, advertisers have the advantage of reaching new people on Facebook “who are likely to be interested in their business because they are similar to their existing customers”.
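Facebook does not disclose how lookalike audiences are actually computed, but the general technique the report describes, matching new users against the profiles of existing customers, can be sketched as a similarity search over user feature vectors. Everything below, from the interest vectors to the similarity threshold, is a made-up illustration, not Facebook’s algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalike_audience(seed_profiles, candidates, threshold=0.9):
    """Return candidate user IDs whose (hypothetical) interest vector
    is close to the average profile of the seed customers."""
    dims = len(next(iter(seed_profiles.values())))
    # Average the seed profiles into a single 'centroid' vector.
    centroid = [sum(p[i] for p in seed_profiles.values()) / len(seed_profiles)
                for i in range(dims)]
    return [uid for uid, vec in candidates.items()
            if cosine(vec, centroid) >= threshold]

seeds = {"customer_a": [1.0, 0.0, 1.0], "customer_b": [1.0, 0.0, 0.8]}
users = {"u1": [0.9, 0.0, 1.0], "u2": [0.0, 1.0, 0.0]}
print(lookalike_audience(seeds, users))  # u1 resembles the customers, u2 does not
```

The key point for the report’s argument is visible in the sketch: user `u1` is selected without ever having shared an interest in the business, purely because their inferred profile resembles existing customers.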

According to the report, the proposed code of ethics, which also covers illegal content, would be overseen by an independent regulator with the power to launch legal action against companies that breach it. To define the necessary standards, organisations engaged in defining standards and issuing certifications could be brought into the process.

The IEEE Global Ethics Initiative

The Institute of Electrical and Electronics Engineers (IEEE), the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity, with over 420,000 members in more than 160 countries, has developed technology standards for decades. In December 2016, it announced ‘The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems’ (The IEEE Global Initiative). Its mission is to ensure that every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity. Through reviewing the philosophical foundations that define autonomy and ontology, The IEEE Global Initiative addresses the alleged potential for autonomous capacity of intelligent technical systems and asks whether decisions made by amoral systems can have moral consequences. The IEEE Global Initiative brings together several hundred participants from six continents, who are thought leaders from academia, industry, civil society, policy and government in the related technical and humanistic disciplines. The major goal of The IEEE Global Initiative is to provide recommendations for standards to the existing IEEE P7000™ Standards Working Groups:

P7000 – Model Process for Addressing Ethical Concerns During System Design: Outlines an approach for identifying and analysing potential ethical issues in a system or software program from the onset of the design effort.

P7001 – Transparency of Autonomous Systems Standard: Provides a standard for developing autonomous technologies that can assess their own actions and help users understand why a system makes certain decisions in different situations.

P7002 – Data Privacy Process Standard: Specifies how to manage privacy issues for systems or software that collect personal data. It will do so by defining requirements that cover corporate data collection policies and quality assurance.

P7003 – Algorithmic Bias Considerations Standard: Provides developers of algorithms for autonomous or intelligent systems with protocols to avoid negative bias in their code. Bias could include the use of subjective or incorrect interpretations of data, such as mistaking correlation for causation.

P7004 – Standard on Child and Student Data Governance: Provides processes and certifications for transparency and accountability for educational institutions that handle data meant to ensure the safety of students.

P7005 – Standard on Employer Data Governance: Provides guidelines and certifications on storing, protecting, and using employee data in an ethical and transparent way. The standard recommends tools that help employees make informed decisions about their own personal information profile.

P7006 – Standard on Personal Data AI Agent Working Group: Addresses concerns raised about machines making decisions without human input. This standard is designed to mitigate the ethical concerns when AI systems can organize and share personal information on their own.

P7007 – Ontological Standard for Ethically Driven Robotics and Automation Systems: Establishes a set of ontologies, at different levels of abstraction, containing the concepts necessary to establish ethically driven methodologies for the design of robots and automation systems.
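The correlation-versus-causation pitfall flagged under P7003 can be shown in a few lines: two invented series that are both driven by a hidden third variable correlate perfectly, even though neither causes the other. The numbers below are purely illustrative.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Both series are driven by a hidden third variable (temperature):
temperature = [10, 15, 20, 25, 30, 35]
ice_cream_sales = [2 * t + 1 for t in temperature]   # rises with heat
pool_incidents = [3 * t - 4 for t in temperature]    # also rises with heat

r = pearson(ice_cream_sales, pool_incidents)
# r is 1.0, yet ice cream sales do not cause pool incidents:
# an algorithm trained on this data alone would learn a spurious link.
```

A system that encodes such a correlation as if it were causal is exactly the kind of “incorrect interpretation of data” the standard asks developers to guard against.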

In late 2017, in order to ensure that the future of AI development remains ethical and socially conscious, the IEEE announced three new AI ethics standards that prioritize humans while keeping pace with rapid progress in the field. These new standards will become part of a living IEEE document titled ‘Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems’. The next version of this document will be released in 2019.

P7008 – Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems: Nudges are defined as overt or hidden suggestions designed to influence human behaviour or emotions. The standard delineates the concepts and the functions necessary to establish and ensure ethically driven methodologies in accordance with worldwide ethics and moral theories.

P7009 – Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems: Establishes a practical, technical baseline of specific methodologies and tools for the development, implementation and use of effective fail-safe mechanisms in autonomous and semi-autonomous systems.

P7010 – Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems: Identifies well-being metrics relating to human factors directly affected by intelligent and autonomous systems to establish a baseline for the types of objective and subjective data these systems should analyse and include.

How to proceed

On January 1, 2018, Germany began enforcing strict rules under which major internet sites such as Facebook, Twitter and YouTube can be fined up to €50 million per case if they don’t remove posts containing hate speech within 24 hours of receiving a complaint. The law requires companies to maintain an “effective and transparent procedure for dealing with complaints” that users can access readily at any time. Upon receiving a complaint, social media companies must remove or block “obviously illegal content” within 24 hours, though they have up to a week when dealing with “complex cases”. It is likely that other EU member countries, including the UK, will follow this directive.

According to Facebook, the number of employees working to crack down on hate speech has been tripled to 30,000 over the last 12 months. “There are cases that are very obvious and where AI can be used to filter those out or at least flag violations for moderators to decide,” Yann LeCun, chief AI scientist for Facebook AI Research, said in a recent interview with Business Insider. “But there are many cases which are difficult to detect unless you have a broader context. For that, the current AI technology is just not there yet. We need machines with ‘common sense’ that learn about the world through data and understand the meaning of content.”
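LeCun’s point, that AI can filter the obvious cases and flag the ambiguous ones for human moderators, amounts to a triage policy over a classifier score. The sketch below uses a toy blocklist score in place of a trained model; the thresholds, tokens and function names are hypothetical, not Facebook’s actual system.

```python
# Hypothetical thresholds; a real system would tune these empirically.
AUTO_REMOVE = 0.95   # score above which content is "very obvious"
NEEDS_REVIEW = 0.60  # ambiguous band routed to human moderators

BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens, not real terms

def toxicity_score(text):
    """Toy stand-in for a trained classifier: the fraction of
    blocklisted tokens in the post."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)

def triage(text):
    """Route a post: auto-remove the obvious, escalate the ambiguous."""
    score = toxicity_score(text)
    if score >= AUTO_REMOVE:
        return "remove"        # the obvious cases AI can filter out
    if score >= NEEDS_REVIEW:
        return "human_review"  # needs the broader context LeCun describes
    return "allow"
```

The design choice worth noting is the middle band: rather than forcing a binary decision, borderline scores are escalated, which is precisely where the 30,000 human reviewers come in.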

Conclusion

The enormous growth and near-monopolistic power of Google, Facebook and Amazon are based on a business model of profiling individuals at an unprecedented level of detail and monetizing these profiles. According to the EU’s GDPR (General Data Protection Regulation), internet users own their data. However, the algorithmic interpretation and profiling of that data is not transparent to the user, opening the possibility of massive psychological manipulation without the user’s awareness or consent. IEEE’s proposed standard P7008 on ethically driven nudging by robotic, intelligent and autonomous systems, combined with strong government-backed regulation against misuse, could be vital to restoring trust in internet applications and social media. Resisting such efforts, as the UK’s Parliamentary Committee perceived Facebook to have done during its investigation, is only likely to reinforce the ‘digital gangster’ image.
