Researcher Julia Krickl warns of the dangers of online fraud via deepfakes. (stock image)
Source: Imago
ZDFheute: The use of artificial intelligence (AI) and so-called deepfakes opens up new opportunities for criminals to commit online fraud. You research deepfakes. How would you explain to a layperson what you are dealing with?
Julia Krickl: Fraud has a long tradition. But generative AI takes it to a new dimension. Whereas photos previously had to be painstakingly developed and assembled, a few clicks and short text prompts are now enough to create a realistic-looking image, video or audio recording. As a result, scams designed to trick people out of their money have become faster, easier and more sophisticated.
AI makes manipulation a mass commodity and presents enormous challenges to society.
Julia Krickl, Austrian Institute of Applied Telecommunications
… is a senior researcher at the Austrian Institute of Applied Telecommunications (ÖIAT). There she works on projects focusing on algorithms, artificial intelligence and ethical issues, especially in the areas of e-commerce, social networks and cybercrime.
ZDFheute: What does this mean for all of us?
Krickl: For example, we see how fraudulent advertising is flooding the internet: criminals use fake images of TV stars, politicians or even doctors to lure people into investment traps or sell them dubious financial products. Such fake campaigns number in the thousands – a clear sign that blindly trusting online images and videos is becoming increasingly risky.
With artificial intelligence, it’s becoming easier to create so-called deepfakes – that is, real-looking videos of real people. This poses a great danger.
October 11, 2025 | 1:33 min
ZDFheute: As someone who is not in the public eye, can I also be targeted with deepfakes?
Krickl: Yes, anyone can become the victim of a deepfake – especially if their photos circulate freely on the internet. Experts therefore strongly recommend setting social media profiles to private.
Deepfakes often appear in the context of image-based sexual violence: women in particular are digitally stripped or edited into pornographic content without their consent – a massive violation of their right to self-determination.
Julia Krickl, Austrian Institute of Applied Telecommunications
ZDFheute: You also warn that deepfake attacks often result in large financial losses. Are there typical scenarios?
Krickl: One of the frauds we encounter most frequently is investment fraud. Criminals increasingly use celebrities to advertise supposedly profitable investments – from stocks to cryptocurrencies. This scam has become far more sophisticated. So be careful: as soon as a politician or a well-known TV personality suddenly advertises a financial investment, extreme caution is advised, because it is usually pure investment fraud.
How much money is made from sexual violence and digital abuse? Who is in the deepfake porn business?
11/12/2024 | 28:28 min
ZDFheute: Other fraud methods are harder to spot. How can you tell whether a video or audio recording is fake?
Krickl: Today's fake videos can often still be recognized by their details: unnatural lip movements, mismatched audio, blurry transitions between face and background, or conspicuous hands. But the technology is developing rapidly. Models like Google's "Veo 3" already produce clips that are difficult to detect with the naked eye. In the future, it will be important to critically examine content in context and to use technical detection tools to uncover fraud.
Criminals use celebrity deepfakes to promote fake investments. With cloned voices, manipulated facial expressions and matching lip movements, the videos look real. Via social media, victims are lured to fake platforms, where apparent profits initially build trust before payouts are blocked and high fees are charged. Many victims have lost large sums of money.
“The strategic use of deepfakes is new and especially dangerous because it uses images, sounds, and movements that appear real to create stories that appear compelling and credible, which can trick even cautious people into making risky investments they might never have agreed to without this manipulation,” said researcher Julia Krickl.
The next level of fraud is live deepfakes: real-time video calls with fake financial experts or influencers who simulate personal advice. A well-known case shows how news broadcasts are manipulated to make fraudulent platforms appear legitimate.
Criminals specifically use deepfakes to defraud investors and artificially manipulate stock prices. Celebrities and financial experts are digitally faked to promote worthless stocks as profitable investments. Via social media and fake sites, victims end up in WhatsApp groups, where apparent profits build trust. But the criminals follow the so-called "pump and dump" principle: prices are first driven up, then the perpetrators sell their shares at a profit, while the defrauded investors suffer heavy losses. In the future, deepfakes could also spread fake company reports or fictitious crises, and thereby significantly influence the market.
Romance scammers are increasingly using deepfakes to gain trust on dating platforms and exploit victims financially. Rather than building genuine interpersonal closeness, their goal is to extort money – often with videos or audio recordings that appear real. In the future, live deepfakes could increase the danger even further because they simulate real-time interaction.
Obvious “romance scam” warning signs include: requests for money or gifts, overly perfect platform profiles, constant availability, meetings that never happen, and flashy media content.
Researcher Julia Krickl says of the schemes used by romance scammers: "Criminals build fictitious identities from digital archives with hundreds of photos, backgrounds, and family stories that make a person appear credible." Even in phone or video calls, criminals can now interact under fake identities in a way that appears real.
In so-called CEO fraud, criminals pose as executives and pressure victims by invoking hierarchy, time pressure and secrecy to force quick transfers. With the help of deepfakes, the deception reaches a new dimension: voices and faces are imitated realistically, so that a video or phone call credibly reinforces the authority of the fake identity. Victims believe they are handling strategic takeovers and large transfers, while behind the facade lies AI-generated fraud.
ZDFheute: How can technology help uncover deepfakes effectively?
Krickl: There are already several free tools that can detect synthetic media – for example AI-generated images, videos or audio recordings. In Austria we are working on the research project "Defame Fakes", which is developing better deepfake detection methods. It's a race between fraudsters and technology: the fraudsters are becoming more sophisticated, but so are the detection tools.
Artificial intelligence manipulation reaches new dimensions: people appear to say or do things they have never said or done. How big is the danger?
April 13, 2021 | 30:33 min
ZDFheute: Do we also need better state protection mechanisms, or is it only individuals who have to confront this phenomenon?
Krickl: The fight against digital fraud must not be left to individuals alone. In the European Union, the Digital Services Act has been in force in all member states since 2024 and places greater obligations on platforms. Organizations like the ÖIAT act as "trusted flaggers" and can prioritize illegal content so that it is promptly reviewed and removed. But enforcement remains a problem: platforms typically act only under pressure. They earn revenue from every advertisement – including fraudulent investment ads.
Interview conducted by Marcel Burkhardt.
