ÖIAT Research

The Austrian Institute for Applied Telecommunications (ÖIAT) has been promoting the competent, safe and responsible use of digital media for more than 20 years. Together with our partners, we conduct research on current topics related to the digital world. On this page we provide an overview of past and current research projects.

TGuard


Coordination
https://www.fh-salzburg.ac.at/
Partner
https://www.aies.at/
https://www.ait.ac.at/
https://www.bmlv.gv.at/
https://www.neke-neke.com/
https://oiat.at/

Funding
https://www.kiras.at/
https://www.ffg.at/
https://www.bmf.gv.at/
Contact

Julia Krickl
Co-Head of Research & Innovation
krickl@oiat.at
+43-1-595 2112-28

Project duration: 2025/01 - 2026/12
Topics: Artificial Intelligence, Disinformation, Social Media

Disinformation campaigns and fake news pose a serious threat to democratic societies. They undermine trust in the media and public institutions, distort public opinion, and can cause significant economic damage. Rapid technological advances, particularly in automated text, image, and video generation, further exacerbate the problem: while the technology evolves at high speed, societal discourse and academic research often struggle to keep up. Within the TGuard project, innovative methods are being developed to detect disinformation on social media platforms and to formulate effective strategies against AI-generated false information.


Fake news refers to deliberately disseminated false or misleading information intended to manipulate public opinion or serve specific interests. In its Risk Assessment 2024, the Austrian Armed Forces identified disinformation campaigns as one of the most significant national security challenges, particularly in the context of hybrid threats. Social media platforms, fake news networks, and the use of bots and AI-generated content are making the spread of such material ever faster, more efficient, and harder to detect.

The TGuard research project takes a comprehensive approach to analyzing and combating disinformation in digital spaces, investigating both the generation and detection of fake news. By examining both perspectives, the project enables the development of a holistic system that not only enhances state-of-the-art detection methods but can also be used for awareness-raising initiatives. A central outcome of the project is a structured catalogue of requirements that systematically captures the needs of end users and serves as a foundation for the development of practical, user-oriented solutions.

The project focuses on the development of innovative technologies for the automated detection of fake news and social bots on platforms such as TikTok and YouTube. In addition to identifying existing disinformation, the mechanisms behind its dissemination are analyzed. Continuous monitoring, adaptation, and safeguarding of large language models and image generation models like Stable Diffusion are essential to ensure their integrity and to prevent their misuse as tools for spreading disinformation. A secure testing environment is also being developed in which AI-powered detection systems can be trained and refined.
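
To give a rough idea of what automated detection means in practice, the following Python sketch frames fake-news detection as simple binary text classification. It is purely illustrative and not TGuard's actual detection system: the miniature labelled dataset is invented, and a production detector would be trained on large annotated corpora and draw on far richer signals than raw text, such as account behaviour and propagation patterns.

    # Illustrative baseline only: fake-news detection as binary text
    # classification. The four labelled posts are invented placeholders;
    # a real detector would use large annotated corpora and richer signals.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    posts = [
        "Miracle cure suppressed by authorities, share before it is deleted!",
        "Ministry publishes its annual budget report for public review.",
        "Insiders reveal secret lab behind manipulated election tallies.",
        "City council approves new tram line after public consultation.",
    ]
    labels = [1, 0, 1, 0]  # 1 = disinformation, 0 = legitimate

    # Word and bigram TF-IDF features feeding a linear classifier.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(),
    )
    model.fit(posts, labels)

    # Score an unseen post: estimated probability of disinformation.
    new_post = "Leaked memo proves the vote was decided in advance!"
    print(model.predict_proba([new_post])[0, 1])

Even this toy setup shows why continuous adaptation matters: a classifier trained on yesterday's phrasing degrades as campaigns change wording, which is one reason the project emphasizes ongoing monitoring and retraining of its models.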

TGuard combines technological research with social responsibility. In addition to analyzing technical vulnerabilities, the project also addresses the regulatory and ethical dimensions of AI use in public communication. A particular focus is placed on preventive measures: An interactive demonstrator—based on the open-source platform Mastodon—offers users the opportunity to realistically experience the dynamics of fake news networks. This virtual learning environment illustrates how automated disinformation campaigns operate and allows the testing of new, target-group-specific strategies for combating them. Through this preventive approach, the project aims to strengthen individuals' ability to critically assess information—regardless of the topic—without fostering general distrust in the media.
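
To illustrate the kind of scripted activity such a demonstrator simulates, here is a hedged Python sketch that publishes a post through Mastodon's documented REST endpoint (POST /api/v1/statuses). The instance URL and access token are placeholders, and nothing here represents the demonstrator's actual implementation; a script like this should only ever target an isolated test instance, never the public fediverse.

    # Hedged sketch: seeding a sandboxed Mastodon instance with simulated
    # posts via the documented REST endpoint POST /api/v1/statuses.
    # INSTANCE and ACCESS_TOKEN are placeholders for an isolated sandbox.
    import requests

    INSTANCE = "https://mastodon.example.test"  # hypothetical sandbox
    ACCESS_TOKEN = "REPLACE_WITH_BOT_ACCOUNT_TOKEN"

    def post_simulated_status(text: str) -> dict:
        """Publish one simulated post from a bot account on the sandbox."""
        resp = requests.post(
            f"{INSTANCE}/api/v1/statuses",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            data={"status": text, "visibility": "unlisted"},
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        status = post_simulated_status("Simulated post for the training demo.")
        print(status["id"], status["url"])

Because Mastodon is open source and self-hostable, an entire fake-news network, with accounts, followers, and automated posting schedules, can be reproduced in a closed environment, which is what makes it a suitable base for this kind of learning tool.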

TGuard is funded by the KIRAS security research programme of the Federal Ministry of Finance.