In the cooperative R&D project defame Fakes, we are working on recognising deepfakes and manipulations in digital image and video content and initiating awareness measures.
Whether it's DALL·E 3, face filters on TikTok and Instagram, or DeepFaceLab: technologies that allow the manipulation of videos and images are booming, becoming increasingly accessible, and marking a turning point for many people's trust in digital content. How can we recognise what is real when fakes are becoming ever more realistic? From public authorities and administration to media organisations, the private sector and civil society, everyone faces the challenges and dangers posed by deepfakes, for example as a threat in the area of cybercrime, but also to democratic processes.
As part of the cooperative R&D project defame Fakes, we are therefore working on the detection of deepfakes and manipulations in digital image and video content and initiating awareness measures. The aim is to counteract the continuing erosion of trust in digital content, strengthen the technological capabilities for recognising image and video manipulation, and thereby protect companies and society.
The activities of the research project range from building a data collection for research into assessment tools, through the evaluation of threat scenarios and the requirements derived from them, to the iterative design of preventive and reactive measures. The focus is on the following three core areas:
- The research and development of suitable and comprehensible assessment tools for the detection of deepfakes and manipulations in large collections of digital image and video content.
- The design and initiation of preventive awareness-raising activities in society, in close cooperation with public authorities, the media, GSK partners and other relevant stakeholders, in order to support the national implementation of the Deepfake Action Plan and ultimately to raise knowledge and awareness.
- The analysis of threat scenarios, of the social, ethical and legal implications of deepfakes, and of the associated need for regulation.
defame Fakes is funded by the KIRAS security research programme of the Federal Ministry of Finance.