
The Glaring Shadow of AI: When Our Shiny New Tool Turns Dark

Müge Yücel

Dear Colleagues and Friends,

Remember last month, I was all fired up about the incredible potential of AI in Investor Relations? Still am, don't get me wrong! But like that classic yin yang symbol, where there's dazzling white, there's always a sliver of darkness. And lately, that darker side of AI, especially when it tangles with the murky world of manipulation – what they call "dark psychology" – has been keeping me up at night.

We're all human, right? We have our biases, our vulnerabilities, especially when it comes to something as personal and often taboo as our finances. That's where the real danger lies. While some folks in the financial world operate with empathy and integrity, others… well, others see those human weaknesses as an opportunity. Now, throw AI into that mix – a tool that can learn our triggers, mimic trusted voices, and spread information at lightning speed – and you've got a recipe for some serious trouble.

The shiny promises of AI efficiency and innovation can blind us to this looming threat. That's why I felt it was crucial to hit pause on the AI hype and shine a light on this potential dark side. Investors, regulators, and us, the IR pros on the front lines, need to understand these risks. We're talking about real financial harm, reputations going up in flames, and even the stability of the entire market wobbling.

The Creeping Rise of AI-Powered Financial Trickery

AI is already knee-deep in our financial markets. Algorithmic trading? Sentiment analysis? Predictive modeling? Old news. But what happens when people with less-than-pure intentions start wielding this power?

AI-Generated Fake News: The Ultimate Illusion: Deepfakes, AI-written press releases that sound completely genuine, and armies of synthetic social media accounts spreading carefully crafted lies… it's not a bad sci-fi flick anymore; it's a real and present danger. Imagine if this hit close to home:

· An AI spits out a flawless, fake earnings report, and suddenly our stock price tanks based on fabricated numbers.

· A deepfake from our CEO announces a bogus merger and sends investors into a buying frenzy based on a lie.

· AI-driven bots flood social media with fake news and subtly (or not so subtly) influence investor sentiment against us or in favor of a competitor, in a completely artificial way.

Remember that fake takeover bid we saw a while back – the one that triggered that crazy, short-lived stock jump? That was probably human-driven. Now imagine AI doing the same thing, but faster, smarter and much harder to trace. That's scary. It gives me goosebumps, and I'm still not sure how we would even “know”.

Algorithmic Pump-and-Dump: The Silent Predator: AI can analyze market vulnerabilities, predict how retail or passive investors will react to certain triggers, and execute classic pump-and-dump schemes with frightening precision. It can spot hype building on social media, artificially fuel it, and then - boom! - the masterminds dump their shares and leave everyone else holding the bag as the bubble deflates.
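For the technically curious: one defensive building block against this kind of scheme is simple anomaly detection on trading volume. The sketch below is a minimal, hypothetical illustration I put together – the data, window size and threshold are my own assumptions, not a production surveillance tool.

```python
from statistics import mean, stdev

def spike_flags(series, window=5, threshold=3.0):
    """Flag points that sit more than `threshold` standard deviations
    above the rolling mean of the previous `window` observations."""
    flags = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Guard against a flat baseline (zero variance).
        z = (series[i] - mu) / sigma if sigma > 0 else 0.0
        flags.append((i, z > threshold))
    return flags

# Hypothetical daily trading volumes: quiet, then a sudden coordinated spike.
volumes = [100, 104, 98, 101, 103, 99, 102, 100, 480]
print([i for i, hot in spike_flags(volumes) if hot])  # prints [8]
```

The same idea applies to mention counts on social media: a spike that towers over its own recent baseline, with no real news behind it, deserves a second look.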

Personalized Psychological Games: They Know What Makes Us Tick: This is where the "dark psychology" aspect really comes into play. AI doesn't just crunch numbers; it learns how to influence us. Just think of the classic manipulation tactics – gaslighting, exploiting our fears, making us feel like everyone else is doing it too (social proof). AI can automate this on a massive, personalized scale.

· Imagine AI chatbots designed to sound like a trusted financial advisor, subtly pushing investors into incredibly risky investments that profit the manipulators.

· Sentiment analysis could pinpoint investors who are feeling a little vulnerable or overly emotional and then target them with hard-hitting sales pitches, knowing exactly which buttons to push.
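To make the mechanics less abstract: at its crudest, emotional targeting can start from nothing more than counting loaded words in a post. The toy scorer below is purely illustrative – the word lists are invented assumptions, and real systems use far more sophisticated models – but it shows why "vulnerable" investors are so easy to find at scale.

```python
# Toy lexicon-based sentiment scorer. The word lists are illustrative
# assumptions, not a real model.
FEAR_WORDS = {"worried", "scared", "losing", "panic", "afraid"}
GREED_WORDS = {"moon", "rocket", "guaranteed", "rich", "unstoppable"}

def emotional_score(post: str) -> dict:
    """Count fear- and greed-laden words in a social media post."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    return {
        "fear": sum(w in FEAR_WORDS for w in words),
        "greed": sum(w in GREED_WORDS for w in words),
    }

print(emotional_score("I'm scared of losing my savings, should I panic?"))
# prints {'fear': 3, 'greed': 0}
```

A manipulator running something like this over millions of posts knows exactly whose fears to play on – which is why the same technique, used defensively, can help flag campaigns that target the frightened.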

Dark Psychology: Our Human Weaknesses, Amplified by AI

The thought is disturbing, but AI can learn to exploit our inherent psychological biases with frightening efficiency.

Confirmation Bias: The Echo Chamber of Lies: AI algorithms can feed investors information that confirms what they already believe, even if it's completely false. If someone is stubbornly bullish on a failing company, the AI can compile a news feed that blocks out all the negative data and traps the investor in a bad investment based on a distorted reality.

FOMO on Steroids: The Artificial Hype Machine: Remember the hype around certain meme stocks or NFTs? AI-driven social media bots can create this artificial hype on demand, giving the impression that everyone is getting rich quick. Fear of missing out (FOMO) sets in and people rush into the stock without doing their due diligence - often at the peak of the bubble.

Abuse of Trust: When Seeing Is Not Believing: Deepfake executives, AI-generated analyst reports that sound incredibly credible, fake endorsements from seemingly credible sources… AI can undermine the foundation of trust in our financial markets. If we can't tell the difference between what is real and what is synthetic, how can we have any confidence in the information we receive?

The Bull's Eye for Investor Relations: Our World in Danger

The dark side of AI is not an abstract threat; it has a direct impact on our IR operations.

The Crushing Erosion of Trust: If investors can no longer trust our earnings announcements, press releases or even our financial reports, the entire market could descend into chaos. Large-scale AI-generated fraud could lead to widespread skepticism and make it incredibly difficult for legitimate companies like ours to communicate effectively and build real trust.

The Regulatory Catch-Up Game: Let's face it, current securities laws were not written with AI-driven fraud in mind. Regulators are smart, but they still need to catch up with deepfakes, algorithmic manipulation and synthetic media. The legal framework is simply not yet equipped for this level of sophistication. What's more, new tools tend to find their playground before any limits are placed on them.

The Wild Ride of Increasing Volatility: AI-powered misinformation can trigger terrifying flash crashes, inflate irrational market bubbles that burst spectacularly, and set off panic selling based on made-up news. We saw years ago, with the 2010 "Flash Crash", how automated systems can destabilize markets – now imagine that amplified by malicious AI.

The Nightmare of Legal and Reputational Consequences: If our company becomes the target of an AI-driven smear campaign, we face potential lawsuits, a plunge in our stock price and long-term damage to our brand. Proving our innocence in a world where AI can generate convincing fake evidence is a daunting prospect.

Let's Arm Ourselves Against the Darkness

Okay, I know this sounds a bit like a dystopian movie, but the risks are real. We can't just bury our heads in the sand and hope they go away. We, the IR professionals, need to be part of the solution. Here are some things that have been on my mind:

Becoming Smart Detectives: Recognizing AI Fairy Tales: We need to develop a keen eye for the telltale signs of AI-generated content. Things like:

· Unnatural fluency: language that is too perfect and lacks the subtle nuances of human speech.

· Inconsistencies: minor visual or auditory glitches in videos, especially around faces and voices.

· Lack of verifiable sources: information that cannot be traced to credible origins.

· Overly emotional or sensationalized language: content designed to trigger strong emotional reactions and bypass critical thinking.

· Sudden, inexplicable spikes: jumps in social media engagement or trading volumes that do not match real news.
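A checklist like this can even be roughed out in code to triage what deserves a closer human look. The sketch below is a hedged illustration only – the field names and thresholds are assumptions I made up, and a real triage tool would need proper data behind each signal.

```python
def red_flag_score(item: dict) -> int:
    """Count how many warning signs from the checklist an item trips.
    Field names and thresholds are illustrative assumptions."""
    score = 0
    if item.get("verifiable_sources", 0) == 0:
        score += 1  # no traceable, credible origin
    if item.get("exclamation_ratio", 0.0) > 0.05:
        score += 1  # sensationalized, emotion-triggering language
    if item.get("engagement_spike", 1.0) > 10.0:
        score += 1  # sudden spike vs. its own baseline, with no real news
    return score

suspicious = {"verifiable_sources": 0, "exclamation_ratio": 0.12,
              "engagement_spike": 40.0}
print(red_flag_score(suspicious))  # prints 3
```

The point is not the code itself but the discipline: turn gut feelings into explicit, repeatable checks, then escalate anything that scores high.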

Demand transparency from platforms: We need to push social media platforms and news outlets to invest in better AI detection tools and be more proactive in flagging potentially manipulated content.

Educate ourselves and our investors: We need to become more critical consumers of information and educate our investors about the potential risks. This includes alerting them to warning signs and encouraging them to verify information from multiple trusted sources.

Promote ethical AI use in our companies: As IR professionals, we can advocate for the responsible and transparent use of AI in our own companies. If we use AI in our communications, we should clearly disclose this.

The Path to the Future: Vigilance Is Our Shield

AI is a powerful force, and like any powerful force, it can be used for good or ill. The key lies in our awareness, our vigilance and our proactive measures. We need to work together across industries, with technology companies, with regulators and with each other to stay ahead of these evolving threats.

If we act now, we can reap the incredible benefits of AI without falling prey to its dark side. However, if we ignore these risks, we could be facing a future where trust erodes, markets become unpredictable and deception rules.

My final thought? Stay skeptical. Stay informed. Stay safe.

As AI evolves, so will the tactics of those seeking to exploit it. We, the IR experts, are on the front lines of this new battlefield. If we are not ready to experiment, we may miss not only opportunities but also the chance to mitigate risks.

Where are you in your AI Journey? What warning signs do you see? How are you preparing for this evolving landscape? Share your thoughts – let's work together to protect the integrity of our financial markets and arm ourselves with knowledge.🚀

Best, Muge

Your fellow IR Enthusiast!


Yücel, currently Director of Investor Relations and Sustainability at Galata Wind Enerji (GWIND.IS), began her investor relations career in 2008 at Dogus Otomotiv (DOAS.IS). She promotes proactive strategies utilizing digital technology and AI, and she specializes in shareholder targeting. Galata Wind, traded on the Istanbul Stock Exchange, operates wind and solar farms in Turkey and plans further expansion into Europe, reaching a capacity of over 1,000 MW by 2030.

Yücel has recently published "The Investor Relations Playbook - Achieving Sustainable Success", a hands-on guidebook on investor relations operations with templates, checklists and how-to guides. The book is available in print in Turkish and in digital form in English.
