Microsoft reveals how AI tools have made e-commerce fraud, job scams and tech support fraud more dangerous

Microsoft, in its latest Cyber Signals report, says that artificial intelligence has significantly lowered barriers for cybercriminals, enabling more sophisticated and convincing fraud schemes. Between April 2024 and April 2025, Microsoft thwarted $4 billion in fraud attempts, rejected 49,000 fraudulent partnership enrollments, and blocked approximately 1.6 million bot signup attempts per hour.

E-commerce fraud: AI creates convincing fake storefronts in minutes

AI tools now allow fraudsters to create convincing e-commerce websites in minutes rather than days or weeks. These sites feature AI-generated product descriptions, images, and fake customer reviews that mimic legitimate businesses. AI-powered customer service chatbots add another layer of deception, interacting with customers and stalling complaints with scripted excuses to delay chargebacks.

Microsoft reports that much of this AI-powered fraud originates from China and Germany, with Germany targeted because it is one of the largest e-commerce markets in the European Union. To combat these threats, Microsoft has implemented fraud detection systems across its products, including Microsoft Defender for Cloud and Microsoft Edge, which features website typo protection and domain impersonation detection using deep learning technology.
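To make the idea of typo and impersonation detection more concrete, the minimal Python sketch below flags domains that closely resemble, but do not exactly match, a short list of well-known brand domains using simple string similarity. This is only an illustration of the general concept, not Microsoft Edge's actual implementation, which relies on deep learning models; the domain list and similarity threshold here are hypothetical.

```python
# Toy typosquatting check: flag domains that are very similar to,
# but not the same as, a known brand domain.
# Illustrative only; the KNOWN_DOMAINS list and threshold are hypothetical.

from difflib import SequenceMatcher

KNOWN_DOMAINS = ["microsoft.com", "amazon.com", "paypal.com"]

def looks_like_impersonation(domain: str, threshold: float = 0.85) -> bool:
    """Return True if the domain looks like a near-miss of a known brand domain."""
    candidate = domain.lower()
    for known in KNOWN_DOMAINS:
        similarity = SequenceMatcher(None, candidate, known).ratio()
        if candidate != known and similarity >= threshold:
            return True
    return False

print(looks_like_impersonation("micros0ft.com"))  # True: likely lookalike domain
print(looks_like_impersonation("example.com"))    # False: not close to a known brand
```

In practice, production systems combine many more signals (page content, certificates, reputation data), but the core intuition of spotting near-miss domains is the same.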

Job scams: AI powers fake interviews and employment offers

Employment fraud has evolved as generative AI enables scammers to create fake job listings, fake profiles built on stolen credentials, and AI-powered email campaigns targeting job seekers. These scams often appear legitimate through AI-powered interviews and automated correspondence, making it increasingly difficult to identify fraudulent offers.

Warning signs include unsolicited job offers promising high pay for minimal qualifications, requests for personal information including bank details, and offers that seem too good to be true. Microsoft advises job seekers to verify employer legitimacy by cross-checking company details on official websites and platforms like LinkedIn, and to be wary of emails from free domains rather than official company email addresses.
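As a purely illustrative example of those email checks, the short Python sketch below flags a recruiter address that comes from a free email provider or whose domain does not match the company it claims to represent. The provider list is a small hypothetical sample, and this does not replace verifying the company on its official website and LinkedIn.

```python
# Simple red-flag check for a recruiter's email address.
# Illustrative only; FREE_EMAIL_PROVIDERS is a small hypothetical sample.

FREE_EMAIL_PROVIDERS = {"gmail.com", "outlook.com", "yahoo.com", "proton.me"}

def recruiter_email_red_flags(sender_email: str, claimed_company_domain: str) -> list[str]:
    """Return a list of warning signs found in a recruiter's email address."""
    flags = []
    domain = sender_email.rsplit("@", 1)[-1].lower()
    if domain in FREE_EMAIL_PROVIDERS:
        flags.append("sender uses a free email provider instead of a company address")
    if domain != claimed_company_domain.lower():
        flags.append("sender domain does not match the company they claim to represent")
    return flags

print(recruiter_email_red_flags("hr.contoso@gmail.com", "contoso.com"))
# Both warnings fire for a free-provider address that doesn't match the claimed company.
```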

Tech support fraud: AI enhances social engineering attacks

While some tech support scams don't yet leverage AI, Microsoft has observed financially motivated groups such as Storm-1811 impersonating IT support through voice phishing to gain access to victims' devices via legitimate tools like Windows Quick Assist. AI tools can expedite the collection and organization of information about targeted victims to create more credible social engineering lures.

In response, Microsoft blocks an average of 4,415 suspicious Quick Assist connection attempts daily—approximately 5.46% of global connection attempts. The company has implemented warning messages in Quick Assist to alert users about possible scams before they grant access to their devices and developed a Digital Fingerprinting capability that leverages AI and machine learning to detect and prevent fraud.
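For a sense of scale, a quick back-of-the-envelope calculation from the two figures above (assuming they refer to the same measurement window) implies a global daily volume of roughly 81,000 Quick Assist connection attempts:

```python
# Back-of-the-envelope arithmetic using only the figures reported above.
blocked_per_day = 4_415        # suspicious Quick Assist connections blocked daily
share_of_global = 0.0546       # reported as ~5.46% of global connection attempts

implied_global_per_day = blocked_per_day / share_of_global
print(f"Implied global Quick Assist connection attempts per day: {implied_global_per_day:,.0f}")
# Prints roughly 80,861
```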

Microsoft is taking a proactive approach to fraud prevention through its Secure Future Initiative. In January 2025, the company introduced a new policy requiring product teams to perform fraud prevention assessments and implement fraud controls as part of their design process. Microsoft has also joined the Global Anti-Scam Alliance to collaborate with governments, law enforcement, and other organizations to protect consumers from scams.