Deep fake picture. Credit: faz.net
Introduction
A few days ago, Google apologized for another embarrassing AI blunder caused by its image-generating model Gemini, which modified pictures with a farcical disregard for historical context. When asked for imagery of certain historical circumstances or well-known individuals, Gemini produced ridiculous results. For instance, the US Founding Fathers were rendered as slave owners and dark-skinned individuals. According to Google, the problem was the result of bias in the training data. Mistakes by Large Language Models (LLMs) are inevitable. They hallucinate, they reflect biases and they behave in unexpected ways. But those responsible for the mistakes are not the models; they are the individuals who created them.
The Growing Threat of Deep Fakes
According to an essay by Nitin Agrawal, 'Understanding Deep Fakes: The Growing Threat of AI Manipulation' (LinkedIn), the technology behind deep fakes is advancing rapidly, with the result that highly realistic and convincing counterfeit videos, audio recordings and images can be produced. These manipulated media files often show individuals appearing to say or do things they never actually did, raising serious concerns about privacy, security and trust in the future of AI. Deep fake technology leverages deep learning algorithms to swap faces or seamlessly alter existing content. By training on vast amounts of data, these algorithms can generate synthetic media that is virtually indistinguishable from real footage. As a result, deep fakes pose a significant threat to the integrity of information on the internet. Malicious actors can weaponize the technology to spread false narratives, manipulate public opinion and undermine trust in institutions. Deep fakes thus have the potential to disrupt democratic processes by fabricating speeches, interviews or debates by political candidates.
At this year's Munich Security Conference, a coalition of twenty tech giants, including OpenAI, Meta and Microsoft, announced a joint effort to combat deceptive AI content. The initiative responds to mounting concerns over AI-generated deep fakes manipulating electoral processes. The coalition has committed to developing tools that detect and address misleading AI-generated media, raising public awareness and swiftly removing such content from its platforms. Beyond these efforts, however, individual awareness and critical thinking remain crucial in combating deep fakes and the damage they do to our trust in AI's usefulness.
The Relationship between Trust and Truth
With respect to trust, we intuitively know that different levels of trust exist in different kinds of organisations. Low-trust organisations, such as dictatorships, operate by promoting fear. Members of high-trust organisations are motivated by a sense of meaning and mission. Another way to contrast low-trust with high-trust organisations is to compare the role of power in achieving the desired outcome. In low-trust organisations, power plays the key role in attaining defined goals. In high-trust organisations, authority is vital: it must be distributed across the entire organisation and aimed at a shared mission. For members of an organisation to act on that authority safely and reliably, three prerequisites must be met: character, competence and authority. When all three conditions are present, trust develops naturally, almost reflexively.
Truth is usually considered to be the opposite of falsity. The concept of truth is discussed and debated in various contexts, including philosophy, art, theology and science. Over the past centuries many theories of truth have emerged. Most commonly, truth is viewed as the correspondence of language or thought to a mind-independent world. Called 'the correspondence theory of truth', it maintains that a proposition is true if and only if it corresponds to a fact in the world. The correspondence theory thus anchors truth in reality. It is also considered 'the common-sense view of truth', with the implication that only humans (possibly with the support of AI systems) can experience truth.
Four Ways to Protect against Deep Fakes in 2024 and beyond
According to an essay just published by the World Economic Forum (weforum.org), disinformation is ranked as a top global risk, with deep fakes among the most worrying uses of AI. The essay suggests the following means of protection:
1. Application of AI technology: With machine learning and neural networks, AI systems can analyse digital content for the inconsistencies typically associated with deep fakes. Forensic methods of the kind used to investigate criminal activities can examine facial manipulation to verify or disprove the authenticity of a document. A minimal sketch of such an inconsistency check appears after this list.
2. Government policies: Based on the proposed AI Act in Europe and the Executive Order on AI in the US, governments are attempting to introduce accountability and trust by allowing users to verify the authenticity of content. International consensus on ethical standards, definitions of acceptable use and classifications of what constitutes a malicious deep fake is needed to create a unified front against misuse.
3. Zero-trust mindset: The 'zero-trust' approach means that no content should be trusted unless it can be verified. Zero-trust calls for a healthy dose of scepticism and constant verification of the content presented. This mindset aligns with mindfulness practices that encourage individuals to pause before reacting to emotionally triggering content and instead engage with digital content intentionally and thoughtfully.
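To make the first point a little more concrete, the following is a minimal, hypothetical sketch of the kind of inconsistency check an AI-based detector might perform. It assumes facial landmark positions have already been extracted from each video frame and simply flags unusually jerky frame-to-frame motion, a common artifact of face-swapping pipelines; the threshold and the synthetic data are invented for illustration and are not taken from any real detector.

```python
# Toy inconsistency check: flag videos whose facial landmarks jump around
# unnaturally between frames. Real detectors combine many such signals.
import numpy as np

def jitter_score(landmarks: np.ndarray) -> float:
    """landmarks: array of shape (frames, points, 2) with (x, y) positions."""
    # Mean per-frame displacement of every landmark point.
    displacements = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)
    return float(displacements.mean())

def looks_manipulated(landmarks: np.ndarray, threshold: float = 2.5) -> bool:
    # The threshold is an assumption made for this example only.
    return jitter_score(landmarks) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Smooth, natural-looking motion vs. motion with injected jitter.
    base = np.cumsum(rng.normal(0, 0.2, size=(100, 68, 2)), axis=0)
    tampered = base + rng.normal(0, 3.0, size=base.shape)
    print("original flagged:", looks_manipulated(base))      # expected: False
    print("tampered flagged:", looks_manipulated(tampered))  # expected: True
```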
How to Build a Zero-Trust Security Model
Back in 2021, the White House issued an executive order calling on federal agencies to move toward a zero-trust security strategy, citing cloud adoption and the inevitability of data breaches as key drivers. Enterprises planning zero-trust transitions should also consider creating dedicated, cross-functional teams to develop strategies and drive implementation. Ideally, a zero-trust team should include members with expertise in application, network and infrastructure security, as well as in user and device identity. However, vendor marketing around zero-trust strategies and products can be confusing and even downright incorrect. No 'one-size-fits-all' zero-trust product exists today. Rather, zero-trust is an overarching strategy involving a collection of tools, policies and procedures that build a strong barrier around work processes to ensure data security. Zero-trust is a journey, not a destination: it takes a great deal of planning and teamwork, but a zero-trust security model is one of the most important initiatives an enterprise can adopt.
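As a rough illustration of the underlying principle ('never trust, always verify'), the short sketch below denies every request by default and grants access only when identity, device posture and authorization all check out. The field names, policy table and checks are hypothetical, not a reference implementation of any vendor product; a real zero-trust deployment spreads such controls across the whole stack.

```python
# Minimal sketch of a deny-by-default, per-request zero-trust check.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    device_compliant: bool   # e.g. disk encryption on, OS patched
    resource: str

# Hypothetical policy: which users may reach which resources.
ACCESS_POLICY = {"payroll-db": {"alice"}, "wiki": {"alice", "bob"}}

def authorize(req: Request) -> bool:
    # Deny by default; every condition must hold for every single request.
    if not req.mfa_passed:
        return False
    if not req.device_compliant:
        return False
    return req.user in ACCESS_POLICY.get(req.resource, set())

if __name__ == "__main__":
    print(authorize(Request("bob", True, True, "payroll-db")))    # False
    print(authorize(Request("alice", True, True, "payroll-db")))  # True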
Best Deep Fake Detector Tools
As the technology behind deep fakes has advanced, so too have the tools and techniques designed to recognize them. The following, based on the article '5 Best Deepfake Detector Tools & Techniques' published by unite.ai in February 2024, summarizes some of the most widely used tools and techniques available today:
Sentinel, produced by Thales: Sentinel's deep fake detection technology uses advanced AI algorithms to analyse uploaded media and determine whether it has been manipulated. The system provides a detailed report of its findings, including a visualization of the areas of the media that have been altered.
Intel's Real-Time Deepfake Detector: Intel has introduced a real-time deep fake detector known as FakeCatcher. The technology can detect fake videos with a 96% accuracy rate, returning results in milliseconds. FakeCatcher looks for authentic clues in real videos by assessing the 'blood flow' visible in the pixels of the video: when the heart pumps blood, the veins change colour subtly. These blood flow signals are collected from across the face, and deep learning algorithms allow the system to instantly determine whether a video is real or fake.
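The following toy sketch illustrates the general idea behind such photoplethysmography-based checks, not Intel's actual implementation: skin colour in a face region varies periodically with the pulse, so a genuine video should show a dominant frequency in the normal heart-rate band. The synthetic green-channel traces and the 0.5 energy threshold are assumptions made for the example.

```python
# Toy pulse check: does the face-region colour trace oscillate at a
# plausible heart rate? Real systems extract and clean this signal from video.
import numpy as np

def has_pulse_signal(green_means: np.ndarray, fps: float = 30.0) -> bool:
    """green_means: mean green value of a face region, one sample per frame."""
    signal = green_means - green_means.mean()
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)   # roughly 42-180 beats per minute
    if power.sum() == 0:
        return False
    # Heuristic: most of the spectral energy should sit in the heart-rate band.
    return power[band].sum() / power.sum() > 0.5

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fps, seconds = 30.0, 10
    t = np.arange(int(fps * seconds)) / fps
    real = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.1, t.size)  # ~72 bpm pulse
    fake = rng.normal(0, 0.5, t.size)        # no periodic pulse component
    print("real video shows a pulse:", has_pulse_signal(real, fps))  # True
    print("fake video shows a pulse:", has_pulse_signal(fake, fps))  # False
```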
Microsoft's Video Authenticator: Microsoft's Video Authenticator tool analyses a still photo or video and provides a confidence score indicating whether the media has been manipulated. It detects the boundaries of subtle grayscale elements that are imperceptible to the human eye and returns its score in real time, allowing for immediate detection of a deep fake.
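As a hedged illustration of the two ideas in that description, namely looking for faint grayscale boundaries and reporting the result as a confidence score, the toy below measures how much of an image consists of soft, low-contrast transitions of the kind produced by feathered blending. The thresholds and test images are invented for the example; this is not Microsoft's algorithm.

```python
# Toy "blending seam" detector: faint but non-zero grayscale gradients are
# treated as suspicious and mapped to a 0-1 confidence score.
import numpy as np

def manipulation_confidence(gray: np.ndarray) -> float:
    """gray: 2-D array of grayscale pixel values in [0, 255]."""
    gy, gx = np.gradient(gray.astype(float))     # finite-difference gradients
    magnitude = np.hypot(gx, gy)
    # "Subtle" boundaries: faint transitions, unlike flat areas or sharp edges.
    subtle = (magnitude > 1.0) & (magnitude < 5.0)
    return float(np.clip(subtle.mean() * 4.0, 0.0, 1.0))

if __name__ == "__main__":
    clean = np.full((64, 64), 128.0)
    clean[20:44, 20:44] = 200.0                   # sharp, clearly visible edge
    feathered = np.full((64, 64), 128.0)
    ramp = np.linspace(128.0, 200.0, 24)          # soft seam, ~3 grey levels/pixel
    feathered[20:44, 20:44] = ramp[np.newaxis, :] # region fades in gradually
    print("clean image score:    ", manipulation_confidence(clean))      # ~0.0
    print("feathered image score:", manipulation_confidence(feathered))  # ~0.5
```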
Deep Fake Detection Based on Mismatches between Mouth Shape and Wording: Developed by researchers from Stanford University and the University of California, this technique exploits the fact that the dynamics of the mouth shape are sometimes inconsistent with the spoken words. Advanced AI algorithms analyse the video and detect these inconsistencies; a mismatch is a strong indication that the video is a deep fake.
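A minimal sketch of that lip-sync idea is shown below. It assumes a per-frame mouth-openness measurement and an audio loudness envelope have already been extracted and simply checks whether the two signals move together; real systems compare detailed mouth shapes (visemes) with the spoken sounds, so this is only an illustration of the principle, with an invented threshold.

```python
# Toy lip-sync check: low correlation between mouth movement and speech
# loudness is treated as a red flag.
import numpy as np

def lip_sync_score(mouth_openness: np.ndarray, audio_envelope: np.ndarray) -> float:
    """Both inputs sampled at the same rate (e.g. once per video frame)."""
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-9)
    a = (audio_envelope - audio_envelope.mean()) / (audio_envelope.std() + 1e-9)
    return float(np.mean(m * a))                 # Pearson correlation in [-1, 1]

def looks_dubbed_or_faked(mouth_openness, audio_envelope, threshold=0.3) -> bool:
    return lip_sync_score(mouth_openness, audio_envelope) < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.linspace(0, 10, 300)
    speech = np.abs(np.sin(2 * np.pi * 0.8 * t)) + rng.normal(0, 0.05, t.size)
    matching_mouth = speech + rng.normal(0, 0.1, t.size)         # in sync
    unrelated_mouth = np.abs(np.sin(2 * np.pi * 0.5 * t + 1.0))  # out of sync
    print("genuine clip flagged:", looks_dubbed_or_faked(matching_mouth, speech))  # False
    print("fake clip flagged:   ", looks_dubbed_or_faked(unrelated_mouth, speech)) # True
```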
Conclusion
Deep fakes have the potential to disrupt everything from personal relationships to political elections, seriously damaging the future acceptance of, and trust in, AI. As the technology behind deep fakes continues to advance, so too must our methods of detection. It is important to remember, however, that technology alone cannot solve the problem. Education and growing public awareness of misinformation are vital in battling the threat of deep fakes.